Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively the maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics, such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees of Life. We provide two linear-time algorithms to check the isomorphism, respectively compatibility, of a set of trees, or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
An improved maximum power point tracking method for photovoltaic systems
Tafticht, T.; Agbossou, K.; Doumbia, M.L.; Cheriti, A. [Institut de recherche sur l' hydrogene, Departement de genie electrique et genie informatique, Universite du Quebec a Trois-Rivieres, C.P. 500, Trois-Rivieres (QC) (Canada)]
2008-07-15
In most of the maximum power point tracking (MPPT) methods described in the current literature, the optimal operating point of the photovoltaic (PV) system is estimated by linear approximations. However, these approximations can lead to less than optimal operating conditions and hence considerably reduce the performance of the PV system. This paper proposes a new approach to determine the maximum power point (MPP) based on measurements of the open-circuit voltage of the PV modules, and a nonlinear expression for the optimal operating voltage is developed from this open-circuit voltage. The approach is thus a combination of the nonlinear and perturbation and observation (P&O) methods. The experimental results show that the approach clearly improves the tracking efficiency of the maximum power available at the output of the PV modules. The new method reduces the oscillations around the MPP and increases the average MPPT efficiency obtained. The new MPPT method will deliver more power to any generic load or energy storage medium. (author)
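The combination described above can be sketched as follows. This is a hedged illustration, not the paper's method: the paper's nonlinear Voc-to-Vmpp expression is not given here, so a generic fractional mapping `voc_to_vmpp` stands in for it, and `pv_power` is a purely illustrative P-V curve.

```python
# Hypothetical sketch: seed the operating voltage from a measured open-circuit
# voltage, then refine with perturb-and-observe (P&O). The mapping voc_to_vmpp
# is an assumed stand-in for the paper's nonlinear expression.

def voc_to_vmpp(v_oc, k=0.76):
    """Estimate the MPP voltage from the open-circuit voltage (assumed model)."""
    return k * v_oc

def perturb_and_observe(measure_power, v_start, step=0.1, iterations=50):
    """Climb the P-V curve: keep stepping in the direction that raised power."""
    v = v_start
    p_prev = measure_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = measure_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy P-V curve with a maximum at v = 15.2 V (purely illustrative).
pv_power = lambda v: max(0.0, 100.0 - (v - 15.2) ** 2)

v0 = voc_to_vmpp(20.0)              # coarse estimate from Voc = 20 V
v_mpp = perturb_and_observe(pv_power, v0)
```

Seeding P&O near the estimated optimum is what shrinks the residual oscillation: the fixed step only has to cover the gap between the nonlinear estimate and the true MPP.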
Improved Maximum Entropy Method with an Extended Search Space
Rothkopf, Alexander
2012-01-01
We report on an improvement to the implementation of the Maximum Entropy Method (MEM). It amounts to departing from the search space obtained through a singular value decomposition (SVD) of the kernel. Based on the shape of the SVD basis functions, we argue that the MEM spectrum for given $N_\tau$ data points $D(\tau)$ and prior information $m(\omega)$ does not in general lie in this $N_\tau$-dimensional singular subspace. Systematically extending the search basis will eventually recover the full search space and the correct extremum. We illustrate this idea through a mock data analysis inspired by actual lattice spectra to show where our improvement becomes essential for the success of the MEM. To remedy the shortcomings of Bryan's SVD prescription, we propose to use the real Fourier basis, which consists of trigonometric functions. Not only does our approach lead to more stable numerical behavior, as the SVD is not required for the determination of the basis functions, but also the resolution of the MEM beco...
Coplen, T.B.; Hopple, J.A.; Böhlke, J.K.; Peiser, H.S.; Rieder, S.E.; Krouse, H.R.; Rosman, K.J.R.; Ding, T.; Vocke, R.D.; Revesz, K.M.; Lamberty, A.; Taylor, P.; De Bievre, P.
2002-01-01
laboratories comparable. The minimum and maximum concentrations of a selected isotope in naturally occurring terrestrial materials for selected chemical elements reviewed in this report are given below:

Isotope   Minimum mole fraction   Maximum mole fraction
2H        0.000 0255              0.000 1838
7Li       0.9227                  0.9278
11B       0.7961                  0.8107
13C       0.009 629               0.011 466
15N       0.003 462               0.004 210
18O       0.001 875               0.002 218
26Mg      0.1099                  0.1103
30Si      0.030 816               0.031 023
34S       0.0398                  0.0473
37Cl      0.240 77                0.243 56
44Ca      0.020 82                0.020 92
53Cr      0.095 01                0.095 53
56Fe      0.917 42                0.917 60
65Cu      0.3066                  0.3102
205Tl     0.704 72                0.705 06

The numerical values above have uncertainties that depend upon the uncertainties of the determinations of the absolute isotope-abundance variations of reference materials of the elements. Because reference materials used for absolute isotope-abundance measurements have not been included in relative isotope abundance investigations of zinc, selenium, molybdenum, palladium, and tellurium, ranges in isotopic composition are not listed for these elements, although such ranges may be measurable with state-of-the-art mass spectrometry. This report is available at the url: http://pubs.water.usgs.gov/wri014222.
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
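A minimal illustration of why the choice of statistic matters, assuming the simplest possible model (a constant Poisson rate across all bins); this is not the paper's microcalorimeter fit. The "Neyman" chi^2, which weights residuals by the observed counts, is biased low, while the Poisson maximum-likelihood estimate is the unbiased sample mean.

```python
# Estimate a constant Poisson rate mu from binned counts. Minimizing
# sum_i (n_i - mu)^2 / n_i yields the harmonic mean of the counts (biased
# low); minimizing the Poisson negative log-likelihood yields the plain
# arithmetic mean (unbiased).

def neyman_chi2_estimate(counts):
    """Minimizer of sum_i (n_i - mu)^2 / n_i (requires all n_i > 0)."""
    k = len(counts)
    return k / sum(1.0 / n for n in counts)   # harmonic mean of the counts

def poisson_ml_estimate(counts):
    """Minimizer of the Poisson negative log-likelihood: the sample mean."""
    return sum(counts) / len(counts)

counts = [4, 6, 5, 3, 7, 5, 6, 4]             # toy Poisson data, true mu = 5
mu_chi2 = neyman_chi2_estimate(counts)        # systematically below the mean
mu_ml = poisson_ml_estimate(counts)           # arithmetic mean = 5.0
```

The bias of the chi^2 estimate does not vanish with more bins, which is why the abstract stresses that the effect matters even for large total counts.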
Improved Minimum Cuts and Maximum Flows in Undirected Planar Graphs
Italiano, Giuseppe F
2010-01-01
In this paper we study minimum cut and maximum flow problems on planar graphs, both in static and in dynamic settings. First, we present an algorithm that given an undirected planar graph computes the minimum cut between any two given vertices in O(n log log n) time. Second, we show how to achieve the same O(n log log n) bound for the problem of computing maximum flows in undirected planar graphs. To the best of our knowledge, these are the first algorithms for those two problems that break the O(n log n) barrier, which has been standing for more than 25 years. Third, we present a fully dynamic algorithm that is able to maintain information about minimum cuts and maximum flows in a plane graph (i.e., a planar graph with a fixed embedding): our algorithm is able to insert edges, delete edges and answer min-cut and max-flow queries between any pair of vertices in O(n^(2/3) log^3 n) time per operation. This result is based on a new dynamic shortest path algorithm for planar graphs which may be of independent int...
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
A Maximum Power Tracker for Improved Thermophotovoltaic Power Generation Project
National Aeronautics and Space Administration — Radioisotope Power Systems (RPS) are critical for future flagship exploration missions in space and on planetary surfaces. Small improvements in the RPS performance,...
An Improved Maximum C/I Scheduling Algorithm Combined with HARQ
[No author listed]
2003-01-01
It is well known that downlink traffic will be much greater than uplink traffic in 3G and beyond. High Speed Downlink Packet Access (HSDPA) is the solution for transmitting high-speed downlink packet services in UMTS, and maximum C/I scheduling is one of the important algorithms for enhancing its performance. An improved scheme, the Thorough Maximum C/I scheduling algorithm, is presented in this article, in which every transmitted frame has the maximum C/I. The simulation results show that the new Maximum C/I scheme outperforms the conventional scheme in throughput and delay performance, and that the FER decreases faster as the maximum number of retransmissions increases.
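The core of max-C/I scheduling can be sketched as below. This is an assumed, simplified model (one user served per frame, no HARQ combining), not the paper's Thorough Maximum C/I scheme.

```python
# Simplified max-C/I scheduler: each frame, serve the user reporting the
# highest carrier-to-interference ratio.

def max_ci_schedule(ci_reports):
    """ci_reports: list of per-frame dicts {user: C/I in dB}.
    Returns the user chosen for each frame."""
    return [max(frame, key=frame.get) for frame in ci_reports]

frames = [
    {"u1": 8.0, "u2": 12.5, "u3": 3.1},
    {"u1": 10.2, "u2": 6.4, "u3": 9.9},
]
chosen = max_ci_schedule(frames)   # ['u2', 'u1']
```

Always serving the instantaneously best channel maximizes cell throughput, at the cost of fairness; the HARQ combination studied in the paper addresses the resulting error/retransmission behavior.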
Night vision image fusion for target detection with improved 2D maximum entropy segmentation
Bai, Lian-fa; Liu, Ying-bin; Yue, Jiang; Zhang, Yi
2013-08-01
Infrared and low-light-level (LLL) images are used for night vision target detection. Given the characteristics of night vision imaging and the shortcomings of traditional detection algorithms in segmenting and extracting targets, we propose a method of infrared and LLL image fusion for target detection with improved 2D maximum entropy segmentation. First, the two-dimensional histogram was improved using the gray level and the maximum gray level in a weighted area, and weights were selected to compute the maximum-entropy thresholds for infrared and LLL image segmentation from this histogram. Compared with traditional maximum entropy segmentation, the algorithm was markedly more effective at target detection, background suppression, and target extraction. The validity of a multi-dimensional-feature AND operation on the infrared and LLL images for feature-level fusion in target detection was then verified. Experimental results show that the detection algorithm performs well for single- and multiple-target detection in complex backgrounds.
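As a rough illustration of the underlying idea, here is the classical 1-D maximum-entropy (Kapur) threshold on a gray-level histogram; the paper's improved 2-D weighted variant is not reproduced here.

```python
import math

# Kapur maximum-entropy thresholding: choose the threshold t that maximizes
# the sum of the entropies of the background (levels <= t) and foreground
# (levels > t) gray-level distributions.

def max_entropy_threshold(hist):
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(len(hist) - 1):
        p0 = sum(p[: t + 1])
        p1 = 1.0 - p0
        if p0 <= 0 or p1 <= 0:
            continue
        h0 = -sum(q / p0 * math.log(q / p0) for q in p[: t + 1] if q > 0)
        h1 = -sum(q / p1 * math.log(q / p1) for q in p[t + 1:] if q > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Bimodal toy histogram: dark background around levels 1-2, bright target at 6-7.
hist = [5, 40, 30, 5, 0, 2, 25, 20]
t = max_entropy_threshold(hist)
```

The 2-D variant in the paper applies the same entropy criterion to a joint histogram of gray level and local gray-level statistics, which is what makes it robust against noise in night vision imagery.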
An improved maximum power point tracking method for a photovoltaic system
Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes
2016-06-01
In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for photovoltaic (PV) systems is proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. An algorithm was then proposed to address wrong decisions that may be made at an abrupt change in irradiance. The proposed auto-scaling variable step-size approach was compared with various other approaches from the literature: classical fixed step-size, variable step-size, and a recent auto-scaling variable step-size maximum power point tracking approach. The simulation results, obtained with MATLAB/Simulink, are given and discussed for validation.
Woonki Na
2017-03-01
This paper presents an improved maximum power point tracking (MPPT) algorithm using a fuzzy logic controller (FLC) in order to extract the potential maximum power from photovoltaic cells. The objectives of the proposed algorithm are to improve the tracking speed and to simultaneously solve inherent drawbacks such as the slow tracking of the conventional perturb and observe (P&O) algorithm. The performances of the conventional P&O algorithm and the proposed algorithm are compared using MATLAB/Simulink in terms of tracking speed and steady-state oscillations. Additionally, both algorithms were experimentally validated through a digital signal processor (DSP)-based controlled boost DC-DC converter. The experimental results show that the proposed algorithm achieves a shorter tracking time, smaller output power oscillation, and higher efficiency, compared with the conventional P&O algorithm.
Galili, Tal; Meilijson, Isaac
2016-01-02
The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
Chronic eccentric cycling improves quadriceps muscle structure and maximum cycling power.
Leong, C H; McDermott, W J; Elmer, S J; Martin, J C
2014-06-01
An interesting finding from eccentric exercise training interventions is the presence of muscle hypertrophy without changes in maximum concentric strength and/or power. The lack of improvements in concentric strength and/or power could be due to long-lasting suppressive effects on muscle force production following eccentric training. Thus, improvements in concentric strength and/or power might not be detected until muscle tissue has recovered (e.g., several weeks post-training). We evaluated alterations in muscular structure (rectus femoris, RF, and vastus lateralis, VL, thickness and pennation angles) and maximum concentric cycling power (Pmax) 1 week following 8 weeks of eccentric cycling training (2×/week; 5-10.5 min; 20-55% of Pmax). Pmax was assessed again at 8 weeks post-training. At 1 week post-training, RF and VL thickness increased by 24±4% and 13±2%, respectively, and RF and VL pennation angles increased by 31±4% and 13±1%, respectively (all P < 0.05). These findings suggest that eccentric cycling can be a time-effective intervention for improving muscular structure and function in the lower body of healthy individuals. The larger Pmax increase detected at 8 weeks post-training implies that sufficient recovery might be necessary to fully detect changes in muscular power after eccentric cycling training.
Improved Reliability of Single-Phase PV Inverters by Limiting the Maximum Feed-in Power
Yang, Yongheng; Wang, Huai; Blaabjerg, Frede
2014-01-01
…The CPG control strategy is activated only when the DC input power from the PV panels exceeds a specific power limit. It makes it possible to limit the maximum feed-in power to the electric grid and also to improve the utilization of PV inverters. As a further study, this paper investigates the reliability performance… of the power devices (e.g., IGBTs) used in PV inverters with the CPG control under different feed-in power limits. A long-term mission profile (i.e., solar irradiance and ambient temperature) based stress analysis approach is extended and applied to obtain the yearly electrical and thermal stresses of the power…
Blandino, Rémi; Etesse, Jean; Grangier, Philippe [Laboratoire Charles Fabry, Institut d' Optique, CNRS, Université Paris-Sud, 2 avenue Augustin Fresnel, 91127 Palaiseau Cedex (France); Leverrier, Anthony [Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland and INRIA Paris-Rocquencourt, 78153 Le Chesnay Cedex (France); Barbieri, Marco [Laboratoire Charles Fabry, Institut d' Optique, CNRS, Université Paris-Sud, 2 avenue Augustin Fresnel, 91127 Palaiseau Cedex, France and Clarendon Laboratory, Department of Physics, University of Oxford, OX1 3PU (United Kingdom); Tualle-Brouri, Rosa [Laboratoire Charles Fabry, Institut d' Optique, CNRS, Université Paris-Sud, 2 avenue Augustin Fresnel, 91127 Palaiseau Cedex, France and Institut Universitaire de France, 103 boulevard St. Michel, 75005, Paris (France)
2014-12-04
We show that the maximum transmission distance of continuous-variable quantum key distribution in the presence of a Gaussian noisy lossy channel can be arbitrarily increased using a heralded noiseless linear amplifier. We explicitly consider a protocol using amplitude- and phase-modulated coherent states with reverse reconciliation. Assuming that the secret key rate drops to zero for a line transmittance T_lim, we find that a noiseless amplifier with amplitude gain g can improve this value to T_lim/g^2, corresponding to an increase in distance proportional to log g. We also show that the tolerance against noise is increased.
Valuing option on the maximum of two assets using improving modified Gauss-Seidel method
Koh, Wei Sin; Muthuvalu, Mohana Sundaram; Aruchunan, Elayaraja; Sulaiman, Jumat
2014-07-01
This paper presents the numerical solution for the option on the maximum of two assets using the Improving Modified Gauss-Seidel (IMGS) iterative method. This option can be governed by a two-dimensional Black-Scholes partial differential equation (PDE). The Crank-Nicolson scheme is applied to discretize the Black-Scholes PDE in order to derive a linear system. Then, the IMGS iterative method is formulated to solve the linear system. Numerical experiments involving the Gauss-Seidel (GS) and Modified Gauss-Seidel (MGS) iterative methods are implemented as control methods to test the computational efficiency of the IMGS iterative method.
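For reference, the baseline Gauss-Seidel iteration that the MGS and IMGS variants build on can be sketched as below; the IMGS preconditioning itself is not reproduced here, and the toy system stands in for the much larger one produced by the Crank-Nicolson discretization.

```python
# Plain Gauss-Seidel iteration for A x = b: sweep through the unknowns,
# updating each one in place using the latest values of the others.

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - s) / A[i][i]
            max_delta = max(max_delta, abs(new_xi - x[i]))
            x[i] = new_xi
        if max_delta < tol:
            break
    return x

# Diagonally dominant toy system (convergence guaranteed for Gauss-Seidel);
# the exact solution is [1, 1, 1].
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = gauss_seidel(A, b)
```

Tridiagonal, diagonally dominant systems of exactly this shape arise from Crank-Nicolson discretizations, which is why Gauss-Seidel-family solvers are a natural fit.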
Improved incremental conductance method for maximum power point tracking using cuk converter
M. Saad Saoud
2014-03-01
The Algerian government is pursuing a strategy focused on developing inexhaustible resources such as solar energy in order to diversify its energy sources and prepare the Algeria of tomorrow: about 40% of the electricity produced for domestic consumption is to come from renewable sources by 2030. It is therefore necessary to concentrate efforts on reducing application costs and increasing performance. This paper presents a simulation of an improved incremental conductance method for maximum power point tracking (MPPT) using a DC-DC Cuk converter. The improved algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions. Its performance is evaluated and compared through theoretical analysis and digital simulation; Matlab/Simulink was employed for the simulation studies.
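The textbook incremental-conductance decision rule (the classical form, not the paper's improved variant) can be sketched as:

```python
# Incremental-conductance MPPT: at the MPP, dP/dV = 0, which is equivalent to
# dI/dV = -I/V. Step the reference voltage toward the side where that
# equality is violated.

def inc_cond_step(v, i, v_prev, i_prev, step=0.1):
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return v                      # no change: hold
        return v + step if di > 0 else v - step
    g_inc = di / dv                       # incremental conductance dI/dV
    g = -i / v                            # negative instantaneous conductance
    if g_inc == g:
        return v                          # dP/dV = 0: at the MPP, hold
    # g_inc > g means dP/dV > 0 (left of the MPP): raise the voltage.
    return v + step if g_inc > g else v - step
```

Because the rule tests a condition that holds exactly at the MPP, it can in principle stop perturbing there, which is why it copes better with rapidly changing irradiance than plain P&O.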
Improvement of the detector resolution in X-ray spectrometry by using the maximum entropy method
Fernández, Jorge E.; Scot, Viviana; Giulio, Eugenio Di; Sabbatucci, Lorenzo
2015-11-01
In every X-ray spectroscopy measurement the influence of the detection system causes loss of information. Different mechanisms contribute to form the so-called detector response function (DRF): the detector efficiency, the escape of photons as a consequence of photoelectric or scattering interactions, the spectrum smearing due to the finite energy resolution, and, in solid-state detectors (SSDs), charge collection artifacts. To recover the original spectrum, it is necessary to remove the detector influence by solving the so-called inverse problem. The maximum entropy unfolding technique solves this problem by imposing a set of constraints, taking advantage of the known a priori information and preserving the positive-defined character of the X-ray spectrum. This method has been included in the tool UMESTRAT (Unfolding Maximum Entropy STRATegy), which adopts a semi-automatic strategy to solve the unfolding problem based on a suitable combination of the codes MAXED and GRAVEL, developed at PTB. In the past, UMESTRAT demonstrated the capability to resolve characteristic peaks that appeared overlapped in Si SSD measurements, giving good qualitative results. In order to obtain quantitative results, UMESTRAT has been modified to include the additional constraint of the total number of photons in the spectrum, which can be easily determined by inverting the diagonal efficiency matrix. The features of the improved code are illustrated with some examples of unfolding for three commonly used SSDs: Si, Ge, and CdTe. The quantitative unfolding can be considered a software improvement of the detector resolution.
Dorothy Cimino Brown
2012-01-01
The 2008 World Health Organization World Cancer Report describes global cancer incidence soaring with many patients living in countries that lack resources for cancer control. Alternative treatment strategies that can reduce the global disease burden at manageable costs must be developed. Polysaccharopeptide (PSP) is the bioactive agent from the mushroom Coriolus versicolor. Studies indicate PSP has in vitro antitumor activities and inhibits the growth of induced tumors in animal models. Clear evidence of clinically relevant benefits of PSP in cancer patients, however, is lacking. The investment of resources required to complete large-scale, randomized controlled trials of PSP in cancer patients is more easily justified if antitumor and survival benefits are documented in a complex animal model of a naturally occurring cancer that parallels human disease. Because of its high metastatic rate and vascular origin, canine hemangiosarcoma is used for investigations in antimetastatic and antiangiogenic therapies. In this double-blind randomized multidose pilot study, high-dose PSP significantly delayed the progression of metastases and afforded the longest survival times reported in canine hemangiosarcoma. These data suggest that, for those cancer patients for whom advanced treatments are not accessible, PSP as a single agent might offer significant improvements in morbidity and mortality.
Ultrasonic Imaging Using a Flexible Array: Improvements to the Maximum Contrast Autofocus Algorithm
Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.
2009-03-01
In previous work, we presented the maximum contrast autofocus algorithm for estimating unknown imaging parameters, e.g., for imaging through complicated surfaces using a flexible ultrasonic array. This paper details recent improvements to the algorithm. The algorithm operates by maximizing an image contrast metric with respect to the imaging parameters. For a flexible array, the relative positions of the array elements are parameterized using a cubic spline function, and the spline control points are estimated by iterative maximization of the image contrast via simulated annealing. The resultant spline gives an estimate of the array geometry and the profile of the surface that it has conformed to, allowing the generation of a well-focused image. A pre-processing step is introduced to obtain an initial estimate of the array geometry, reducing the time taken for the algorithm to converge. Experimental results are demonstrated using a flexible array prototype.
A maximum noise fraction transform with improved noise estimation for hyperspectral images
LIU Xiang; ZHANG Bing; GAO LianRu; CHEN DongMei
2009-01-01
Feature extraction is often performed to reduce the spectral dimension of hyperspectral images before image classification. The maximum noise fraction (MNF) transform is one of the most commonly used spectral feature extraction methods. The spectral features in several bands of hyperspectral images are submerged by noise. The MNF transform is advantageous over the principal component (PC) transform because it takes the noise information in the spatial domain into consideration. However, the experiments described in this paper demonstrate that classification accuracy is greatly influenced by the MNF transform when the ground objects are mixed together. The underlying mechanism is revealed and analyzed by mathematical theory. In order to improve the performance of classification after feature extraction when ground objects are mixed in hyperspectral images, a new MNF transform, with an improved method of estimating the hyperspectral image noise covariance matrix (NCM), is presented. This improved MNF transform is applied to both simulated data and real data. The results show that, compared with the classical MNF transform, the new method enhances the ability of feature extraction and increases classification accuracy.
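The classical shift-difference noise-covariance estimate that the MNF transform relies on can be sketched as below; this is the standard baseline, not the paper's improved NCM estimator.

```python
import numpy as np

# Shift-difference noise covariance estimate for MNF: assume neighbouring
# pixels share the same signal, so their difference is dominated by noise.

def noise_covariance(cube):
    """cube: (rows, cols, bands) hyperspectral array."""
    diff = cube[:, 1:, :] - cube[:, :-1, :]     # horizontal neighbour differences
    flat = diff.reshape(-1, cube.shape[2])
    # Halve the covariance: differencing two independent noise samples
    # doubles the noise variance.
    return np.cov(flat, rowvar=False) / 2.0

rng = np.random.default_rng(0)
cube = rng.normal(size=(50, 50, 4))             # pure unit-variance noise
ncm = noise_covariance(cube)                    # diagonal should be near 1
```

The paper's point is that this assumption breaks down exactly where ground objects are mixed, since neighbouring pixels then differ in signal as well as noise, inflating the estimate.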
The optical synthetic aperture image restoration based on the improved maximum-likelihood algorithm
Geng, Zexun; Xu, Qing; Zhang, Baoming; Gong, Zhihui
2012-09-01
Optical synthetic aperture imaging (OSAI) can be envisaged in the future for improving the image resolution from high-altitude orbits. Several future projects are based on optical synthetic apertures for science or Earth observation. Compared with equivalent monolithic telescopes, however, the partly filled aperture of OSAI attenuates the modulation transfer function of the system. Consequently, images acquired by an OSAI instrument have to be post-processed to restore images equivalent in resolution to those of a single filled aperture. The maximum-likelihood (ML) algorithm proposed by Benvenuto performs better than the traditional Wiener filter, but it does not work stably, and the point spread function (PSF) was assumed to be known and unchanged during iterative restoration. In fact, the PSF is unknown in most cases, and its estimate should be updated alternately during optimization. Facing these limitations, an improved ML (IML) reconstruction algorithm is proposed in this paper, which incorporates PSF estimation by means of parameter identification into ML and updates the PSF successively during iteration. Accordingly, the IML algorithm converges stably and reaches better results. Experimental results show that the proposed algorithm performs much better than ML in terms of peak signal-to-noise ratio, mean square error, and the average contrast evaluation index.
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the strict requirements on multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iterative counts. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the shift order, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
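The period-estimation idea, autocorrelating the envelope and reading off the dominant lag, can be sketched as below. This is a simplified stand-in (rectification for the envelope rather than a Hilbert transform, and no deconvolution filtering), not the full IMCKD procedure.

```python
import numpy as np

# Estimate the fault period as the lag of the highest non-zero-lag peak in
# the autocorrelation of the (rectified, mean-removed) signal envelope.

def estimate_period(signal, min_lag=5):
    env = np.abs(signal)                  # crude envelope via rectification
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]   # lags >= 0
    return min_lag + int(np.argmax(ac[min_lag:]))

period = 50                               # impulses every 50 samples
t = np.arange(2000)
impulses = (t % period == 0).astype(float)
noise = np.random.default_rng(1).normal(scale=0.05, size=t.size)
lag = estimate_period(impulses + noise)   # expected near 50
```

Estimating the period from the data itself, and refreshing it each iteration, is what removes the need for the user-supplied prior period that plain MCKD requires.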
Jia, Feng; Lei, Yaguo; Shan, Hongkai; Lin, Jing
2015-01-01
The early fault characteristics of rolling element bearings carried by vibration signals are quite weak because the signals are generally masked by heavy background noise. To extract the weak fault characteristics of bearings from the signals, an improved spectral kurtosis (SK) method is proposed based on maximum correlated kurtosis deconvolution (MCKD). The proposed method combines the ability of MCKD in indicating the periodic fault transients and the ability of SK in locating these transients in the frequency domain. A simulation signal overwhelmed by heavy noise is used to demonstrate the effectiveness of the proposed method. The results show that MCKD is beneficial to clarify the periodic impulse components of the bearing signals, and the method is able to detect the resonant frequency band of the signal and extract its fault characteristic frequency. Through analyzing actual vibration signals collected from wind turbines and hot strip rolling mills, we confirm that by using the proposed method, it is possible to extract fault characteristics and diagnose early faults of rolling element bearings. Based on the comparisons with the SK method, it is verified that the proposed method is more suitable to diagnose early faults of rolling element bearings. PMID:26610501
Iden, Sascha; Peters, Andre; Durner, Wolfgang
2017-04-01
Soil hydraulic properties are required to solve the Richards equation, the most widely applied model for variably-saturated flow. While the experimental determination of the water retention curve does not pose significant challenges, the measurement of unsaturated hydraulic conductivity is time consuming and costly. The prediction of the unsaturated hydraulic conductivity curve from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. A well-known problem of conductivity prediction for retention functions with wide pore-size distributions is the sharp drop in conductivity close to water saturation. This problematic behavior is well known for the van Genuchten model if the shape parameter n assumes values smaller than about 1.3. So far, the workaround for this artefact has been to introduce an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable and thus a discontinuous water capacity function. We present an improved parametrization of the hydraulic properties which uses the original capillary saturation function and introduces a maximum pore radius only in the pore-bundle model. Closed-form equations for the hydraulic conductivity function were derived for the unimodal and multimodal retention functions of van Genuchten and have been tested by sensitivity analysis and applied in curve fitting and inverse modeling of multistep outflow experiments. The resulting hydraulic conductivity function is smooth, increases monotonically close to saturation, and eliminates the sharp drop in conductivity close to saturation. Furthermore, the new model retains the smoothness and continuous differentiability of the water retention curve. We conclude that the resulting soil hydraulic functions are physically more reasonable than the ones predicted by previous approaches, and are thus ideally suited for numerical simulations.
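For context, the classical van Genuchten-Mualem closed form, whose near-saturation behavior the paper improves, can be sketched as below; the parameter values are illustrative only, and the new maximum-pore-radius variant is not reproduced.

```python
# van Genuchten retention function and the hydraulic conductivity predicted
# from it by Mualem's pore-bundle model (classical closed form).

def vg_saturation(h, alpha, n):
    """Effective saturation Se(h) for suction head h >= 0."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h) ** n) ** (-m)

def mualem_conductivity(se, n, k_sat=1.0, tau=0.5):
    """Relative conductivity K(Se) from Mualem's model."""
    m = 1.0 - 1.0 / n
    return k_sat * se ** tau * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# n < 1.3 is the range where the classical form drops sharply near saturation.
se = vg_saturation(h=100.0, alpha=0.01, n=1.2)
k = mualem_conductivity(se, n=1.2)
```

Evaluating `mualem_conductivity` for Se approaching 1 with n = 1.2 reproduces the steep drop near saturation that motivates the maximum-pore-radius correction.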
Improving patient safety: how and why incidences occur in nursing care
Maria Cecilia Toffoletto
2013-10-01
The present investigation was a cross-sectional, quantitative research study analyzing incidents associated with nursing care using a root-cause methodological analysis. The study was conducted in a public hospital intensive care unit (ICU) in Santiago de Chile and investigated 18 incidents related to nursing care that occurred from January to March of 2012. The sample was composed of six cases involving medications and the self-removal of therapeutic devices. The contributing factors were related to the tasks and technology, the professional work team, the patients, and the environment. The analysis confirmed that the cases presented similar contributing factors, thereby indicating that the vulnerable aspects of the system are primarily responsible for the occurrence of incidents. We conclude that root-cause analysis facilitates the identification of these vulnerable points, and its recommendations make proactive management of system-error prevention possible.
Blandino, Rémi; Barbieri, Marco; Etesse, Jean; Grangier, Philippe; Tualle-Brouri, Rosa
2012-01-01
We show that the maximum transmission distance of continuous-variable quantum key distribution in the presence of a Gaussian noisy lossy channel can be arbitrarily increased using a linear noiseless amplifier. We explicitly consider a protocol using amplitude- and phase-modulated coherent states with reverse reconciliation. We find that a noiseless amplifier with amplitude gain g can increase the maximum admissible losses by a factor 1/g^2.
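A back-of-the-envelope reading of the 1/g^2 factor: the extra admissible channel loss is 10 log10(g^2) dB, which can be converted into extra reach under an assumed fiber attenuation. The 0.2 dB/km figure below is our assumption for standard telecom fiber, not a value from the abstract:

```python
import math

def extra_reach_km(g, fiber_loss_db_per_km=0.2):
    """Extra admissible loss from a noiseless amplifier of amplitude gain g
    is a factor 1/g^2, i.e. 10*log10(g^2) dB, converted to fiber length."""
    extra_loss_db = 10.0 * math.log10(g ** 2)
    return extra_loss_db / fiber_loss_db_per_km

print(round(extra_reach_km(2.0), 1))  # g = 2 -> ~6.02 dB -> ~30.1 km
```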
Maximum energy output of a DFIG wind turbine using an improved MPPT-curve method
Dinh-Chung Phan; Shigeru Yamamoto
2015-01-01
A new method is proposed for obtaining the maximum power output of a doubly-fed induction generator (DFIG) wind turbine to control the rotor- and grid-side converters. The efficiency of maximum power point tracking that is obtained by the proposed method is theoretically guaranteed under assumptions that represent physical conditions. Several control parameters may be adjusted to ensure the quality of control performance. In particular, a DFIG state-space model and a control technique based o...
Improving Pre-emptive Prescribing to Relieve Patient Discomfort Occurring Out of Hours.
Williams, Rhys; Herbert, Fiona; Orme, Amy; Casswell, Georgina
2016-01-01
Junior doctors are commonly asked to prescribe simple medications for symptom relief for patients out of hours. Unfortunately, time constraints and other pressures may lead to delays before the medications are prescribed. A quality improvement project was conducted at a large university teaching hospital to establish the extent of the problem, with the aim of finding measures to improve pre-emptive prescribing for patients. Baseline data were gathered across three busy wards to calculate the total number of new prescriptions made over the course of a weekend. There were 24 new prescriptions required over the weekend, an increase of 14.9% compared to the existing prescriptions on a Friday. Following the first intervention this decreased to 10.2%, and by the second intervention the rate was 4.9%. Data collected several months later confirmed that the interventions remained successful and that pre-emptive prescribing continued. Overall, our interventions have shown that the number of new prescriptions required out of hours can be reduced by educating junior doctors on pre-emptive prescribing.
Andreas Malangre
2016-03-01
Nocturnal sleep effects on memory consolidation following gross motor sequence learning were examined using a complex arm movement task. This task required participants to produce non-regular spatial patterns in the horizontal plane by successively fitting a small peg into different target holes on an electronic pegboard. The respective reaching movements typically differed in amplitude and direction. Targets were visualized prior to each transport movement on a computer screen. With this task we tested 18 subjects (22.6 +/- 1.9 years; 8 female) using a between-subjects design. Participants initially learned a 10-element arm movement sequence either in the morning or in the evening. Performance was retested under free recall requirements 15 minutes post training, as well as 12 hrs and 24 hrs later. Thus each group was provided with one sleep-filled and one wake retention interval. Dependent variables were error rate (number of erroneous sequences) and average sequence execution time (correct sequences only). Performance improved during acquisition. Error rate remained stable across retention. Sequence execution time (inverse to execution speed) significantly decreased again during the sleep-filled retention intervals, but remained stable during the respective wake intervals. These results corroborate recent findings on sleep-related enhancement of consolidation in ecologically valid, complex gross motor tasks. At the same time they suggest this effect to be truly memory-based and independent of repeated access to extrinsic sequence information during retests.
Kiviet, J.F.; Phillips, G.D.A.
2014-01-01
In dynamic regression models conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques an approximation is obtained to the bias in variance estimation yielding a bias corrected variance estimator. This is achieved for both the standard
Improved Determination of the Location of the Temperature Maximum in the Corona
Lemaire, J. F.; Stegen, K.
2016-12-01
The most used method to calculate the coronal electron temperature [Te (r)] from a coronal density distribution [ne (r)] is the scale-height method (SHM). We introduce a novel method that is a generalization of a method introduced by Alfvén ( Ark. Mat. Astron. Fys. 27, 1, 1941) to calculate Te(r) for a corona in hydrostatic equilibrium: the "HST" method. All of the methods discussed here require given electron-density distributions [ne (r)] which can be derived from white-light (WL) eclipse observations. The new "DYN" method determines the unique solution of Te(r) for which Te(r → ∞) → 0 when the solar corona expands radially as realized in hydrodynamical solar-wind models. The applications of the SHM method and DYN method give comparable distributions for Te(r). Both have a maximum [T_{max}] whose value ranges between 1 - 3 MK. However, the peak of temperature is located at a different altitude in both cases. Close to the Sun where the expansion velocity is subsonic (r < 1.3 R_{⊙}) the DYN method gives the same results as the HST method. The effects of the other free parameters on the DYN temperature distribution are presented in the last part of this study. Our DYN method is a new tool to evaluate the range of altitudes where the heating rate is maximum in the solar corona when the electron-density distribution is obtained from WL coronal observations.
Maximum Energy Output of a DFIG Wind Turbine Using an Improved MPPT-Curve Method
Dinh-Chung Phan
2015-10-01
A new method is proposed for obtaining the maximum power output of a doubly-fed induction generator (DFIG) wind turbine to control the rotor- and grid-side converters. The efficiency of maximum power point tracking that is obtained by the proposed method is theoretically guaranteed under assumptions that represent physical conditions. Several control parameters may be adjusted to ensure the quality of control performance. In particular, a DFIG state-space model and a control technique based on the Lyapunov function are adopted to derive the control method. The effectiveness of the proposed method is verified via numerical simulations of a 1.5-MW DFIG wind turbine using MATLAB/Simulink. The simulation results show that when the proposed method is used, the wind turbine is capable of properly tracking the optimal operation point; furthermore, the generator's available energy output is higher when the proposed method is used than it is when the conventional method is used instead.
Maximum-entropy weak lens reconstruction improved methods and application to data
Marshall, P J; Gull, S F; Bridle, S L
2002-01-01
We develop the maximum-entropy weak shear mass reconstruction method presented in earlier papers by taking each background galaxy image shape as an independent estimator of the reduced shear field and incorporating an intrinsic smoothness into the reconstruction. The characteristic length scale of this smoothing is determined by Bayesian methods. Within this algorithm the uncertainties due to the intrinsic distribution of galaxy shapes are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures can be calculated with corresponding uncertainties. We apply this method to two clusters taken from N-body simulations using mock observations corresponding to Keck LRIS and mosaiced HST WFPC2 fields. We demonstrate that the Bayesian choice of smoothing length is sensible and that masses within apertures (including one on a filamentary structure) are reliable. We apply the method to data taken on the cluster MS1054-03 using the Keck LRIS (Clowe et al. 2000) and HST (Hoekstra e...
Wang, Xiao; Vignjevic, Marija; Liu, Fulai
2015-01-01
Plants of spring wheat (Triticum aestivum L. cv. Vinjett) were exposed to moderate water deficit at the vegetative growth stages six-leaf and/or stem elongation to investigate drought priming effects on tolerance to drought and heat stress events occurring during the grain filling stage. Compared with the non-primed plants, drought priming could alleviate photo-inhibition in flag leaves caused by drought and heat stress episodes during grain filling. In the primed plants, drought stress inhibited photosynthesis mainly through a decrease of the maximum photosynthetic electron transport rate, while a decrease of the carboxylation efficiency limited photosynthesis under heat stress. The higher saturated net photosynthetic rate of flag leaves coincided with the lowered nonphotochemical quenching rates in the twice-primed plants under drought stress and in the primed plants during stem elongation under...
Iden, Sascha C.; Peters, Andre; Durner, Wolfgang
2015-11-01
The prediction of unsaturated hydraulic conductivity from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. One problem for conductivity predictions from retention functions with continuous derivatives, i.e. continuous water capacity functions, is that the hydraulic conductivity curve exhibits a sharp drop close to water saturation if the pore-size distribution is wide. So far this artifact has been ignored or removed by introducing an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable. We present a new parameterization of the hydraulic properties which uses the original saturation function (e.g. of van Genuchten) and introduces a maximum pore radius only in the pore-bundle model. In contrast to models using an explicit air entry, the resulting conductivity function is smooth and increases monotonically close to saturation. The model concept can easily be applied to any combination of retention curve and pore-bundle model. We derive closed-form expressions for the unimodal and multimodal van Genuchten-Mualem models and apply the model concept to curve fitting and inverse modeling of a transient outflow experiment. Since the new model retains the smoothness and continuous differentiability of the retention model and eliminates the sharp drop in conductivity close to saturation, the resulting hydraulic functions are physically more reasonable and ideal for numerical simulations with the Richards equation or multiphase flow models.
Improved Determination of the Location of the Temperature Maximum in the Corona
Lemaire, J. F.; Stegen, K.
2016-10-01
The most used method to calculate the coronal electron temperature [Te(r)] from a coronal density distribution [ne(r)] is the scale-height method (SHM). We introduce a novel method that is a generalization of a method introduced by Alfvén (Ark. Mat. Astron. Fys. 27, 1, 1941) to calculate Te(r) for a corona in hydrostatic equilibrium: the "HST" method. All of the methods discussed here require given electron-density distributions [ne(r)] which can be derived from white-light (WL) eclipse observations. The new "DYN" method determines the unique solution of Te(r) for which Te(r → ∞) → 0 when the solar corona expands radially as realized in hydrodynamical solar-wind models. The applications of the SHM method and DYN method give comparable distributions for Te(r). Both have a maximum [T_{max}] whose value ranges between 1 - 3 MK. However, the peak of temperature is located at a different altitude in both cases. Close to the Sun where the expansion velocity is subsonic (r < 1.3 R_{⊙}) the DYN method gives the same results as the HST method. The effects of the other free parameters on the DYN temperature distribution are presented in the last part of this study. Our DYN method is a new tool to evaluate the range of altitudes where the heating rate is maximum in the solar corona when the electron-density distribution is obtained from WL coronal observations.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^{α} with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
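The data covariance matrices referred to above can be built from the standard fractional-differencing impulse response for 1/f^α noise. The sketch below illustrates that construction under those textbook conventions; it is not Langbein's actual implementation:

```python
import numpy as np

def powerlaw_covariance(N, alpha, sigma=1.0):
    """Covariance of power-law noise 1/f^alpha on N evenly spaced samples.
    Uses the fractional-differencing impulse response h_k, so that the
    noise is T @ w with w white and T lower-triangular Toeplitz in h."""
    h = np.zeros(N)
    h[0] = 1.0
    d = alpha / 2.0
    for k in range(1, N):
        h[k] = h[k - 1] * (k - 1 + d) / k   # recursive filter coefficients
    T = np.zeros((N, N))
    for i in range(N):
        T[i, : i + 1] = h[i::-1]            # T[i, j] = h[i - j]
    return sigma ** 2 * T @ T.T

# alpha = 0 reduces to white noise (identity covariance); alpha = 2 is
# a random walk. A white-noise component would be added to this matrix
# before inverting it in the likelihood evaluation.
C = powerlaw_covariance(5, alpha=1.0)       # flicker noise example
print(np.allclose(C, C.T))                  # True: covariance is symmetric
```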
Haijing Niu; Ping Guo; Xiaodong Song; Tianzi Jiang
2008-01-01
The sensitivity of diffuse optical tomography (DOT) imaging exponentially decreases with the increase of photon penetration depth, which leads to a poor depth resolution for DOT. In this letter, an exponential adjustment method (EAM) based on maximum singular value of layered sensitivity is proposed. Optimal depth resolution can be achieved by compensating the reduced sensitivity in the deep medium. Simulations are performed using a semi-infinite model and the simulation results show that the EAM method can substantially improve the depth resolution of deeply embedded objects in the medium. Consequently, the image quality and the reconstruction accuracy for these objects have been largely improved.
Tehsin, Sara; Rehman, Saad; Awan, Ahmad B.; Chaudry, Qaiser; Abbas, Muhammad; Young, Rupert; Asif, Afia
2016-04-01
Sensitivity to the variations in the reference image is a major concern when recognizing target objects. A combinational framework of correlation filters and logarithmic transformation has been previously reported to resolve this issue alongside catering for scale and rotation changes of the object in the presence of distortion and noise. In this paper, we have extended the work to include the influence of different logarithmic bases on the resultant correlation plane. The meaningful changes in correlation parameters along with contraction/expansion in the correlation plane peak have been identified under different scenarios. Based on our research, we propose some specific log bases to be used in logarithmically transformed correlation filters for achieving suitable tolerance to different variations. The study is based upon testing a range of logarithmic bases for different situations and finding an optimal logarithmic base for each particular set of distortions. Our results show improved correlation and target detection accuracies.
Tareef K. Mustafa
2010-01-01
Problem statement: Stylometric authorship attribution is a text-mining approach that analyzes texts, e.g., novels and plays by famous authors, and measures an author's style through attributes that reveal his or her way of writing, assuming each writer has a way of writing that no other writer shares; authorship attribution is thus the task of identifying the author of a given text. In this study, we propose an authorship attribution algorithm that improves the accuracy of stylometric features so that different professionals can be discriminated nearly as well as different persons can be by fingerprints. Approach: The main target of this study is to build an algorithm that supports decision-making systems, enabling users to predict and choose the right author for a specific anonymous novel under consideration, by using a learning procedure to teach the system the stylometric map of the author so that it behaves as an expert opinion. Stylometric Authorship Attribution (AA) usually relies on the frequent word as the best attribute available; many studies have sought other beneficial attributes, yet the frequent word remains ahead of other attributes in research and experiments, and the best technique used so far is counting the bag-of-words with the maximum item set. Results: To improve AA techniques, we use a new pack of attributes with a new measurement tool. The first attribute used in this study is the frequent pair, i.e., a pair of words that always appear together; this attribute is clearly not new, but it has not been a successful attribute compared with the frequent word when using maximum-item-set counters, and the word pair made some mistakes, as the experimental results show. We improve the winnow algorithm by combining it with the computational
Higuita Cano, Mauricio; Mousli, Mohamed Islam Aniss; Kelouwani, Sousso; Agbossou, Kodjo; Hammoudi, Mhamed; Dubé, Yves
2017-03-01
This work investigates the design and validation of a fuel cell management system (FCMS) which can perform when the fuel cell is at water-freezing temperature. The FCMS is based on a new tracking technique with intelligent prediction, which combines maximum efficiency point tracking with a variable perturbation-current step and the fuzzy logic technique (MEPT-FL). Unlike conventional fuel cell control systems, our proposed FCMS considers cold-weather conditions and the reduction of fuel cell set-point oscillations. In addition, the FCMS is built to respond quickly and effectively to variations of the electric load. A temperature controller stage is designed in conjunction with the MEPT-FL in order to operate the FC at low temperatures whilst tracking the maximum efficiency point at the same time. The simulation results, as well as the experimental validation, suggest that the proposed approach is effective and can achieve an average efficiency improvement of up to 8%. The MEPT-FL is validated using a Proton Exchange Membrane Fuel Cell (PEMFC) of 500 W.
Bosmans, H.; Verbeeck, R.; Vandermeulen, D.; Suetens, P.; Wilms, G.; Maaly, M.; Marchal, G.; Baert, A.L. [Louvain Univ. (Belgium)]
1995-12-01
The objective of this study was to validate a new post-processing algorithm for improved maximum intensity projections (MIP) of intracranial MR angiography acquisitions. The core of the post-processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other, preferentially towards white regions. In this way, the skin gets included in the final 'background region' whereas cortical blood vessels and all brain tissues are included in the 'brain region'. The latter region is then used for the MIP. The algorithm runs in less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies, including multiple overlapping thin slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA enhanced MRA, normal and high resolution acquisitions, and acquisitions from mid-field and high-field systems, were filtered. A series of contrast-enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only minimal manual interaction was necessary to segment the brain. The quality of the MIP was significantly improved, especially in post-Gd-DTPA acquisitions or when using MT, due to the absence of high-intensity signals from skin, sinuses, and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms.
Takagi, Hiroshi; Wu, Wenjie
2016-03-01
Even though the maximum wind radius (Rmax) is an important parameter in determining the intensity and size of tropical cyclones, it has been overlooked in previous storm surge studies. This study reviews the existing estimation methods for Rmax based on central pressure or maximum wind speed. These over- or underestimate Rmax because of substantial variations in the data, although an average radius can be estimated with moderate accuracy. As an alternative, we propose an Rmax estimation method based on the radius of the 50 kt wind (R50). Data obtained by a meteorological station network in the Japanese archipelago during the passage of strong typhoons, together with the JMA typhoon best track data for 1990-2013, enabled us to derive the following simple equation, Rmax = 0.23 R50. Application to a recent strong typhoon, the 2015 Typhoon Goni, confirms that the equation provides a good estimation of Rmax, particularly when the central pressure became considerably low. Although this new method substantially improves the estimation of Rmax compared to the existing models, estimation errors are unavoidable because of fundamental uncertainties regarding the typhoon's structure or insufficient number of available typhoon data. In fact, a numerical simulation for the 2013 Typhoon Haiyan as well as 2015 Typhoon Goni demonstrates a substantial difference in the storm surge height for different Rmax. Therefore, the variability of Rmax should be taken into account in storm surge simulations (e.g., Rmax = 0.15 R50-0.35 R50), independently of the model used, to minimize the risk of over- or underestimating storm surges. The proposed method is expected to increase the predictability of major storm surges and to contribute to disaster risk management, particularly in the western North Pacific, including countries such as Japan, China, Taiwan, the Philippines, and Vietnam.
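The stated relation Rmax = 0.23 R50, together with the suggested 0.15-0.35 spread for storm surge runs, can be sketched as follows. The R50 value below is an assumed illustration, not a value from the abstract:

```python
def estimate_rmax_km(r50_km, factor=0.23):
    """Estimate the maximum wind radius Rmax from the 50-kt wind radius R50.
    The abstract's central relation is Rmax = 0.23 R50; factors of
    0.15-0.35 bracket the variability for storm surge simulations."""
    return factor * r50_km

r50 = 200.0  # assumed R50 in km, for illustration only
print(round(estimate_rmax_km(r50), 1))                  # central estimate
print(round(estimate_rmax_km(r50, 0.15), 1),
      round(estimate_rmax_km(r50, 0.35), 1))            # low/high bracket
```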
Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi
2016-01-01
Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.
A. P. Tran
2013-07-01
The vertical profile of shallow unsaturated zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time and a radar electromagnetic model and petrophysical relationships to link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows to use the information of the whole soil moisture profile for the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering 3 soil types, namely, loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Decreasing the update interval from 60 down to 10 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand but significantly for the clay soil. The proposed approach appears to be promising to improve real-time prediction of the soil moisture profiles as well as to provide effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.
Sprafkin, Joyce; Mattison, Richard E.; Gadow, Kenneth D.; Schneider, Jayne; Lavigne, John V.
2011-01-01
Objective: To examine the psychometric properties of the 30-item teacher's version of the Child and Adolescent Symptom Inventory Progress Monitor (CASI-PM-T), a "DSM-IV"-referenced rating scale for monitoring change in ADHD and co-occurring symptoms in youths receiving behavioral or pharmacological interventions. Method: Three separate studies…
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Kakade, Rohan; Walker, John G.; Phillips, Andrew J.
2016-08-01
Confocal fluorescence microscopy (CFM) is widely used in biological sciences because of its enhanced 3D resolution that allows image sectioning and removal of out-of-focus blur. This is achieved by rejection of the light outside a detection pinhole in a plane confocal with the illuminated object. In this paper, an alternative detection arrangement is examined in which the entire detection/image plane is recorded using an array detector rather than a pinhole detector. Using this recorded data an attempt is then made to recover the object from the whole set of recorded photon array data; in this paper maximum-likelihood estimation has been applied. The recovered object estimates are shown (through computer simulation) to have good resolution, image sectioning and signal-to-noise ratio compared with conventional pinhole CFM images.
Juin-Ling Tseng
2016-01-01
Facial animation is one of the most popular 3D animation topics researched in recent years. However, using facial animation requires storing a 3D facial animation model, and this model requires many triangles to accurately describe and demonstrate facial expression animation, because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. These studies have examined the problems of homogeneity of the local coordinate system between different expression models and of retaining the characteristics of the simplified model. This paper proposes a method that applies a Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system, and a Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation, but also retain more facial features.
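A homogeneous coordinate transformation matrix of the kind mentioned above packs rotation and translation into a single 4x4 matrix applied to vertices in homogeneous coordinates. The sketch below uses an arbitrary translation as the simplest instance (the values are illustrative, not taken from the paper):

```python
def mat_vec4(m, v):
    """Multiply a 4x4 matrix by a 4-vector (homogeneous coordinates)."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

vertex = [1.0, 2.0, 3.0, 1.0]              # (x, y, z) with homogeneous w = 1
moved = mat_vec4(translation(10, 0, -5), vertex)
print(moved[:3])  # [11.0, 2.0, -2.0]
```

Expressing all expression models in one such common frame is what lets their vertices be compared in the same local coordinate system before simplification.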
Improved Ant Colony Algorithm for the Maximum Clique Problem
陈荣
2011-01-01
To solve the maximum clique problem more effectively, an improved ant colony algorithm is proposed. By extracting the vertex information of the graph, the graph is represented by a pheromone model. According to the constraints of the maximum clique problem, the ant colony constructs maximal cliques, performing real-time global and local pheromone updates until the maximum clique is found. Experimental results show that the algorithm solves the maximum clique problem well, and its performance is higher than that of the common ant colony algorithm.
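A minimal ant-colony sketch for maximum clique is given below. It is not the paper's exact update rules: pheromone is kept on vertices, each ant greedily grows a clique by sampling candidates in proportion to pheromone, and the globally best clique deposits pheromone after each iteration.

```python
import random

def aco_max_clique(adj, n_ants=10, n_iter=20, rho=0.1, seed=0):
    """Toy ant colony search for a maximum clique.

    adj: dict mapping vertex -> set of neighbours.
    """
    rng = random.Random(seed)
    tau = {v: 1.0 for v in adj}                 # vertex pheromone
    best = []
    for _ in range(n_iter):
        for _ in range(n_ants):
            start = rng.choice(list(adj))
            clique = [start]
            cand = set(adj[start])              # vertices adjacent to all members
            while cand:
                cand_list = list(cand)
                weights = [tau[v] for v in cand_list]
                v = rng.choices(cand_list, weights=weights)[0]
                clique.append(v)
                cand &= adj[v]
            if len(clique) > len(best):
                best = clique
        for v in tau:                           # evaporation
            tau[v] *= (1 - rho)
        for v in best:                          # deposit on the best clique so far
            tau[v] += len(best)
    return best

# 4-clique {0,1,2,3} plus a pendant path 3-4-5
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
adj = {v: set() for v in range(6)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)
print(sorted(aco_max_clique(adj)))  # [0, 1, 2, 3]
```

The invariant that makes the construction valid is `cand &= adj[v]`: only vertices adjacent to every current member remain eligible, so each ant always returns a clique.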
Vanavil, B.; Krishna Chaitanya, K.; Seshagiri Rao, A.
2015-06-01
In this paper, a proportional-integral-derivative controller in series with a lead-lag filter is designed for control of open-loop unstable processes with time delay, based on the direct synthesis method. The performance of the designed controllers has been studied on various unstable processes. Set-point weighting is considered to reduce the undesirable overshoot. The proposed scheme consists of only one tuning parameter, and systematic guidelines are provided for selecting it based on the peak value of the sensitivity function (Ms). Robustness analysis has been carried out based on the sensitivity and complementary sensitivity functions. Nominal and robust control performances are achieved with the proposed method, and improved closed-loop performances are obtained when compared to recently reported methods in the literature.
张俊红; 魏学业; 谷建柱; 王立华
2013-01-01
In order to improve the conversion efficiency of photovoltaic cells, and in view of the oscillation and misjudgment phenomena of the traditional fixed-step perturbation and observation method for maximum power point tracking (MPPT), this paper proposes an improved perturbation and observation method combining variable step size with power prediction, based on the mathematical model of the photovoltaic array. The oscillation and misjudgment problems are eliminated by using an approximate gradient method instead of the optimal gradient method, and by using power prediction over multiple characteristic curves estimated as the external environment changes. The theoretical derivation and the MATLAB simulation flow chart are given in the paper. Simulation results show that the method can significantly improve the tracking precision and speed of MPPT.
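The fixed-step perturbation and observation baseline that the abstract improves on can be sketched on a toy power-voltage curve (the quadratic curve and its 17 V maximum power point are made-up values, not a real module's characteristic):

```python
def pv_power(v):
    """Toy PV power curve with its maximum power point at v = 17 V."""
    return 100.0 - (v - 17.0) ** 2

def perturb_and_observe(v0=10.0, step=0.5, n_steps=100):
    """Classic fixed-step P&O: keep perturbing the operating voltage in the
    same direction while power rises; reverse direction when it falls."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(n_steps):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction   # power dropped: reverse the perturbation
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe()
print(abs(v_mpp - 17.0) <= 0.5)  # True
```

The fixed step is also the source of the oscillation the abstract describes: once near the maximum, the operating point keeps bouncing within one step of it, which is what variable-step and power-prediction refinements aim to suppress.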
Lu, Wei, E-mail: wlu@umm.edu [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Neuner, Geoffrey A.; George, Rohini; Wang, Zhendong; Sasor, Sarah [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Huang, Xuan [Research and Development, Care Management Department, Johns Hopkins HealthCare LLC, Glen Burnie, Maryland (United States); Regine, William F.; Feigenberg, Steven J.; D' Souza, Warren D. [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States)
2014-01-01
Purpose: To investigate whether coaching patients' breathing would improve the match between ITV{sub MIP} (internal target volume generated by contouring in the maximum intensity projection scan) and ITV{sub 10} (generated by combining the gross tumor volumes contoured in 10 phases of a 4-dimensional CT [4DCT] scan). Methods and Materials: Eight patients with a thoracic tumor and 5 patients with an abdominal tumor were included in an institutional review board-approved prospective study. Patients underwent 3 4DCT scans with: (1) free breathing (FB); (2) coaching using audio-visual (AV) biofeedback via the Real-Time Position Management system; and (3) coaching via a spirometer system (Active Breathing Coordinator or ABC). One physician contoured all scans to generate the ITV{sub 10} and ITV{sub MIP}. The match between ITV{sub MIP} and ITV{sub 10} was quantitatively assessed with volume ratio, centroid distance, root mean squared distance, and overlap/Dice coefficient. We investigated whether coaching (AV or ABC) or uniform expansions (1, 2, 3, or 5 mm) of ITV{sub MIP} improved the match. Results: Although both AV and ABC coaching techniques improved frequency reproducibility and ABC improved displacement regularity, neither improved the match between ITV{sub MIP} and ITV{sub 10} over FB. On average, ITV{sub MIP} underestimated ITV{sub 10} by 19%, 19%, and 21%, with centroid distance of 1.9, 2.3, and 1.7 mm and Dice coefficient of 0.87, 0.86, and 0.88 for FB, AV, and ABC, respectively. Separate analyses indicated a better match for lung cancers or tumors not adjacent to high-intensity tissues. Uniform expansions of ITV{sub MIP} did not correct for the mismatch between ITV{sub MIP} and ITV{sub 10}. Conclusions: In this pilot study, audio-visual biofeedback did not improve the match between ITV{sub MIP} and ITV{sub 10}. In general, ITV{sub MIP} should be limited to lung cancers, and modification of ITV{sub MIP} in each phase of the 4DCT data set is recommended.
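The Dice (overlap) coefficient used above to quantify the match between ITVs can be sketched on voxel index sets; the voxel sets below are tiny made-up examples, not patient data:

```python
def dice(a, b):
    """Dice coefficient between two voxel sets: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

itv_mip = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}          # hypothetical ITV_MIP voxels
itv_10  = {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 0, 0)}  # hypothetical ITV_10 voxels
print(round(dice(itv_mip, itv_10), 3))  # 0.571
```

A Dice value of 1 means perfect overlap; the study's values around 0.86-0.88 quantify the residual mismatch between ITV_MIP and ITV_10.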
Yu, Lin; Norton, Sam; McCracken, Lance M
2017-06-01
Acceptance and commitment therapy (ACT) is based on the psychological flexibility model, which includes a therapeutic process referred to as "self-as-context" (SAC). This study investigates whether ACT is associated with an effect on SAC and whether this effect is linked to treatment outcomes in people with chronic pain. Four hundred twelve adults referred to a pain management center participated in the study. Participants completed measures of treatment processes (SAC, pain acceptance) and outcomes (pain-related interference, work and social adjustment, depression) before treatment, upon completion of treatment, and at 9-month follow-up. Paired sample t-tests and analyses of meaningful change were conducted to examine changes in processes and outcomes. Regression analyses with residualized change scores from process and outcome variables, and bivariate growth curve modeling were used to examine the association between change in SAC and change in outcomes. Participants significantly improved on all process and outcome variables at post-treatment (d = .38-.98) and 9-month follow-up (d = .24-.75). Forty-two to 67.5% of participants showed meaningful improvements on each outcome at post-treatment and follow-up. Change in SAC was associated with change in outcomes (β = -.21 to -.31; r = -.16 to -.46). Results support a role for change in SAC in treatment as the psychological flexibility model suggested. This study shows the delivery of a treatment for chronic pain based on ACT was associated with improved SAC and improved functioning for people with chronic pain, and increases in SAC were associated with improved functioning. These results can inform future treatment development. Copyright © 2017 American Pain Society. All rights reserved.
Christopher M. Fulkerson
2017-01-01
Genomic analyses are defining numerous new targets for cancer therapy. Therapies aimed at specific genetic and epigenetic targets in cancer cells, as well as expanded development of immunotherapies, are placing increased demands on animal models. Traditional experimental models do not possess the collective features (cancer heterogeneity, molecular complexity, invasion, metastasis, and immune cell response) critical to predict success or failure of emerging therapies in humans. There is growing evidence, however, that dogs with specific forms of naturally occurring cancer can serve as highly relevant animal models to complement traditional models. Invasive urinary bladder cancer (invasive urothelial carcinoma, InvUC) in dogs, for example, closely mimics the cancer in humans in pathology, molecular features, biological behavior including sites and frequency of distant metastasis, and response to chemotherapy. Genomic analyses are defining further intriguing similarities between InvUC in dogs and that in humans. Multiple canine clinical trials have been completed, and others are in progress with the aim of translating important findings into humans to increase the success rate of human trials, as well as helping pet dogs. Examples of successful targeted therapy studies and the challenges to be met to fully utilize naturally occurring dog models of cancer will be reviewed.
A. M. Yusop
2017-02-01
This study presents the development of a novel maximum power point tracking (MPPT) method based on an input-shaping scheme controller. The proposed method, which changes the initial input response into a shapeable MPPT algorithm, is designed based on an exponential input function. This type of input function is selected because of its capability to stabilize the system at the end of the simulation time and remain in the same condition at the final response time. A comparison of the system with the proposed method and the system with the traditional perturb and observe (PnO) method is also provided. Results show that the system with the proposed method produces higher output power than the system with the PnO method; the difference is approximately 15.45%. Results reveal that the exponential-function input shaper allows the overall output system to exhibit satisfactory behavior and can efficiently track the maximum output power.
Kaiadi, Mehrzad; Tunestål, Per; Johansson, Bengt
2010-01-01
High EGR rates combined with turbocharging have been identified as a promising way to increase the maximum load and efficiency of heavy-duty spark ignition natural gas engines. With stoichiometric conditions a three-way catalyst can be used, which means that regulated emissions can be kept at very low levels. Most of the heavy-duty NG engines are diesel engines converted for SI operation. These engines' components are shared with the diesel engine, which puts limits on higher exh...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Esposito, Rosario; Mensitieri, Giuseppe; de Nicola, Sergio
2015-12-21
A new algorithm based on the Maximum Entropy Method (MEM) is proposed for recovering both the lifetime distribution and the zero-time shift from time-resolved fluorescence decay intensities. The developed algorithm allows the analysis of complex time decays through an iterative scheme based on entropy maximization and the Brent method to determine the minimum of the reduced chi-squared value as a function of the zero-time shift. The accuracy of this algorithm has been assessed through comparisons with simulated fluorescence decays both of multi-exponential and broad lifetime distributions for different values of the zero-time shift. The method is capable of recovering the zero-time shift with an accuracy greater than 0.2% over a time range of 2000 ps. The center and the width of the lifetime distributions are retrieved with relative discrepancies that are lower than 0.1% and 1% for the multi-exponential and continuous lifetime distributions, respectively. The MEM algorithm is experimentally validated by applying the method to fluorescence measurements of the time decays of the flavin adenine dinucleotide (FAD).
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation that was compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Kuzuha, Yasuhisa; Sivapalan, Murugesu; Tomosugi, Kunio; Kishii, Tokuo; Komatsu, Yosuke
2006-04-01
Eagleson's classical regional flood frequency model is investigated. Our intention was not to improve the model, but to reveal previously unidentified important and dominant hydrological processes in it. The change of the coefficient of variation (CV) of annual maximum discharge with catchment area can be viewed as representing the spatial variance of floods in a homogeneous region. Several researchers have reported that the CV decreases as the catchment area increases, at least for large areas. On the other hand, Eagleson's classical studies have been known as pioneer efforts that combine the concept of similarity analysis (scaling) with the derived flood frequency approach. As we have shown, the classical model can reproduce the empirical relationship between the mean annual maximum discharge and catchment area, but it cannot reproduce the empirical decreasing CV-catchment area curve. Therefore, we postulate that previously unidentified hydrological processes would be revealed if the classical model were improved to reproduce the decreasing of CV with catchment area. First, we attempted to improve the classical model by introducing a channel network, but this was ineffective. However, the classical model was improved by introducing a two-parameter gamma distribution for rainfall intensity. What is important is not the gamma distribution itself, but those characteristics of spatial variability of rainfall intensity whose CV decreases with increasing catchment area. Introducing the variability of rainfall intensity into the hydrological simulations explains how the CV of rainfall intensity decreases with increasing catchment area. It is difficult to reflect the rainfall-runoff processes in the model while neglecting the characteristics of rainfall intensity from the viewpoint of annual flood discharge variances.
Research on Text Categorization Based on an Improved Maximum Entropy Algorithm
李学相
2012-01-01
This paper discusses problems of text categorization accuracy. In traditional text classification algorithms, different feature words have the same effect on the classification result, classification accuracy is low, and the time complexity of the algorithm increases. Because the maximum entropy model can integrate various relevant or irrelevant probabilistic knowledge from observations, it can achieve good results on many problems. To address these issues, this paper proposes an improved maximum entropy text classification method that fully combines the advantages of c-means clustering and the maximum entropy algorithm. The algorithm first takes the Shannon entropy as the objective function of the maximum entropy model to simplify the classifier's expression, and then uses the c-means algorithm to classify the optimal features. Simulation results show that, compared with traditional text classification, the proposed method can quickly obtain the optimal classified feature subsets, greatly shortening the working time while improving efficiency and text classification accuracy.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
The effect of natural selection on the performance of maximum parsimony
Ofria Charles
2007-06-01
Background: Maximum parsimony is one of the most commonly used and extensively studied phylogeny reconstruction methods. While current evaluation methodologies such as computer simulations provide insight into how well maximum parsimony reconstructs phylogenies, they tell us little about how well maximum parsimony performs on taxa drawn from populations of organisms that evolved subject to natural selection in addition to the random factors of drift and mutation. It is clear that natural selection has a significant impact on Among Site Rate Variation (ASRV) and the rate of accepted substitutions; that is, accepted mutations do not occur with uniform probability along the genome and some substitutions are more likely to occur than other substitutions. However, little is known about how ASRV and non-uniform character substitutions impact the performance of reconstruction methods such as maximum parsimony. To gain insight into these issues, we study how well maximum parsimony performs with data generated by Avida, a digital life platform where populations of digital organisms evolve subject to natural selective pressures. Results: We first identify conditions where natural selection does affect maximum parsimony's reconstruction accuracy. In general, as we increase the probability that a significant adaptation will occur in an intermediate ancestor, the performance of maximum parsimony improves. In fact, maximum parsimony can correctly reconstruct small 4-taxa trees on data that have received surprisingly many mutations if the intermediate ancestor has received a significant adaptation. We demonstrate that this improved performance of maximum parsimony is attributable more to ASRV than to non-uniform character substitutions. Conclusion: Maximum parsimony, as well as most other phylogeny reconstruction methods, may perform significantly better on actual biological data than is currently suggested by computer simulation studies because of natural
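The parsimony score that maximum parsimony minimizes over candidate trees can be computed for a fixed tree with Fitch's algorithm. A minimal sketch for one character on a 4-taxa tree follows (the tree and character states are illustrative, not Avida data):

```python
def fitch_score(tree, states):
    """Parsimony score of one character on a rooted binary tree.

    tree: nested tuples of leaf names, e.g. (('A', 'B'), ('C', 'D')).
    states: dict mapping leaf name -> character state.
    Returns (possible state set at the root, number of changes).
    """
    if isinstance(tree, str):                  # leaf
        return {states[tree]}, 0
    left, right = tree
    ls, lc = fitch_score(left, states)
    rs, rc = fitch_score(right, states)
    inter = ls & rs
    if inter:
        return inter, lc + rc                  # agreement: no extra change
    return ls | rs, lc + rc + 1                # conflict: one substitution

tree = (('A', 'B'), ('C', 'D'))
states = {'A': '0', 'B': '1', 'C': '0', 'D': '1'}
print(fitch_score(tree, states)[1])  # 2
```

Summing this score over all sites gives the tree's total parsimony length; maximum parsimony then selects the tree topology minimizing that total.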
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
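As a rough illustration of the Maximum Correntropy Criterion (not the paper's classifier or its alternating optimizer), the sketch below fits a one-parameter linear regressor by gradient ascent on a Gaussian-kernel correntropy objective with an L2 penalty; the gross outlier gets an exponentially small weight and barely influences the fit:

```python
import math

def mcc_fit(xs, ys, sigma=1.0, lam=1e-3, lr=0.01, n_iter=2000):
    """Maximize sum_i exp(-(y_i - w*x_i)^2 / (2 sigma^2)) - (lam/2) * w^2
    by gradient ascent, starting from the least-squares solution."""
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    for _ in range(n_iter):
        grad = -lam * w
        for x, y in zip(xs, ys):
            e = y - w * x
            # each sample's pull is re-weighted by its Gaussian kernel value
            grad += math.exp(-e * e / (2 * sigma ** 2)) * e * x / sigma ** 2
        w += lr * grad
    return w

xs = [1, 2, 3, 4, 5, 3]
ys = [2, 4, 6, 8, 10, -10]   # y = 2x, plus one gross outlier
print(round(mcc_fit(xs, ys), 2))  # 2.0
```

Ordinary least squares on the same data gives w = 1.25; the correntropy objective recovers the clean slope because the outlier's residual sits far outside the kernel bandwidth.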
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer, but worse than that of the near maximum likelihood detector.
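The maximum likelihood benchmark that near-ML detectors approximate can be sketched by brute force on a short block: enumerate every bit sequence and pick the one whose noiseless channel output is closest to the received signal. The two-tap channel and block length below are illustrative (practical near-ML detectors prune this exponential search):

```python
from itertools import product

def channel_output(bits, h):
    """Linear ISI channel: y[n] = sum_k h[k] * x[n-k], with x in {-1, +1}."""
    return [sum(h[k] * bits[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(bits))]

def ml_detect(received, h, n_bits):
    """Exhaustive ML detection: the bit sequence whose noiseless channel
    output minimizes the squared Euclidean distance to the received signal."""
    best, best_dist = None, float('inf')
    for cand in product([-1, 1], repeat=n_bits):
        out = channel_output(cand, h)
        d = sum((r - o) ** 2 for r, o in zip(received, out))
        if d < best_dist:
            best, best_dist = cand, d
    return best

h = [1.0, 0.5]                       # hypothetical two-tap ISI channel
tx = (1, -1, -1, 1, 1)
rx = channel_output(tx, h)           # noiseless received signal for the sketch
print(ml_detect(rx, h, len(tx)) == tx)  # True
```

The cost of the exhaustive search grows as 2^n in the block length, which is precisely why equalized and near-ML schemes trade some of this optimality for tractable complexity.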
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
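The classic MaxEnt computation that the generalized approach builds on can be sketched for the textbook die example: among all distributions on {1..6} with a fixed mean (here 4.5), the entropy maximizer has the exponential form p_i ∝ exp(-λ·i), with λ found by solving the moment constraint, here by bisection:

```python
import math

def maxent_die(target_mean, lo=-5.0, hi=5.0, tol=1e-12):
    """Maximum entropy distribution on {1..6} with a fixed mean:
    p_i = exp(-lam * i) / Z, with lam chosen so the mean matches."""
    def mean(lam):
        w = [math.exp(-lam * i) for i in range(1, 7)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / z
    # mean(lam) is strictly decreasing in lam, so bisect on lam
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)
print(round(sum(i * pi for i, pi in zip(range(1, 7), p)), 6))  # 4.5
```

Treating the constraint value 4.5 as exact is exactly the classic assumption discussed above; the generalized approach would instead propagate a density over that value through this same map to obtain a density over the resulting probabilities.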
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
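A small sketch of the quantity such a regularizer maximizes (the paper estimates it inside a differentiable objective; here we just compute the plug-in estimate from empirical counts): the mutual information I(Ŷ; Y) between discrete classification responses and true labels, via H(Ŷ) + H(Y) − H(Ŷ, Y).

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Plug-in Shannon entropy (bits) of an empirical label sequence."""
    n = len(labels)
    return -sum((c / n) * np.log2(c / n) for c in Counter(labels).values())

def mutual_information(y_pred, y_true):
    joint = list(zip(y_pred, y_true))
    return entropy(y_pred) + entropy(y_true) - entropy(joint)

y_true = [0, 0, 0, 0, 1, 1, 1, 1]
perfect = [0, 0, 0, 0, 1, 1, 1, 1]   # responses fully determine the label
useless = [0, 1, 0, 1, 0, 1, 0, 1]   # responses carry no label information
print(mutual_information(perfect, y_true))  # 1.0 bit
print(mutual_information(useless, y_true))  # 0.0 bits
```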
Obasa, Temitope O.; Sowunmi, Funmilola Olusola
2012-01-01
Myiasis is the infestation of the skin by larvae or maggots of a variety of flies. It is a condition that occurs more commonly in adults who are living in and/or have visited tropical countries. It rarely occurs in neonates, and even when seen, only a few larvae are extracted. This case report describes myiasis occurring in an 11-day-old female who had 47 larvae in her skin. PMID:23355934
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
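For contrast with the bounds named above, here is the classic baseline the faster algorithms improve upon (not the paper's Glauber-dynamics method): maximum bipartite matching by repeated augmenting-path search, which runs in O(V·E) time.

```python
def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-vertices adjacent to left-vertex u."""
    match_right = [-1] * n_right          # match_right[v] = left partner of v

    def try_augment(u, seen):
        # Search for an augmenting path starting at free left-vertex u.
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            size += 1
    return size

# 3x3 example: left vertex 0 sees {0,1}, 1 sees {0}, 2 sees {2}
print(max_bipartite_matching({0: [0, 1], 1: [0], 2: [2]}, 3, 3))  # 3
```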
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
袁志辉; 邓云凯; 李飞; 王宇; 柳罡
2013-01-01
In the application of deriving Digital Elevation Models (DEM) of the Earth's surface through InSAR technology, multichannel (multi-frequency or multi-baseline) InSAR techniques can be employed to improve the mapping of complex areas with steep slopes or strong height discontinuities, and to solve the height-ambiguity problem of the single-baseline case. This paper compares the performance of Maximum Likelihood (ML) estimation with Maximum A Posteriori (MAP) estimation, and adds two steps, bad-pixel detection and weighted filtering, after the ML estimation. Bad pixels are detected through cluster analysis and the relationship between adjacent pixels, and a weighted mean filter is then used to remove them. In this way, the efficiency advantage of the ML method is kept while the accuracy of the DEM is improved. Simulation results indicate that, under the same conditions, this method maintains good accuracy while greatly improving computational efficiency, which is advantageous for processing large data sets.
Angiodysplasia Occurring in Jejunal Diverticulosis
Edward A Jones; Hugh Chaun; Phillip Switzer; David J Clow; Ronald J Hancock
1990-01-01
The first case of angiodysplasia occurring in acquired jejunal diverticulosis is reported. The patient presented with occult gastrointestinal bleeding and chronic anemia, and was treated successfully by resection of a 25 cm long segment of jejunum. Possible pathogenetic mechanisms for both angiodysplasia and jejunal diverticulosis are discussed.
Naturally Occurring Radioactive Materials (NORM)
Gray, P. [ed.]
1997-02-01
This paper discusses the broad problems presented by Naturally Occurring Radioactive Materials (NORM). Technologically enhanced naturally occurring radioactive material includes any radionuclides whose physical, chemical, or radiological properties or radionuclide concentration have been altered from their natural state. With regard to NORM in particular, radioactive contamination is radioactive material in an undesired location. This is a concern in a range of industries: petroleum; uranium mining; phosphorus and phosphates; fertilizers; fossil fuels; forestry products; water treatment; metal mining and processing; geothermal energy. The author discusses in more detail the problem in the petroleum industry, including the isotopes of concern, the hazards they present, the contamination which they cause, ways to dispose of contaminated materials, and regulatory issues. He points out that there are three key programs to reduce legal exposure and problems due to these contaminants: waste minimization; NORM assessment (surveys); and NORM compliance (training).
Bae, Yun Jung; Choi, Byung Se; Yoon, Yeon Hong; Woo, Leonard Sun; Jung, Cheol Kyu; Kim, Jae Hyoung [Dept. of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Lee, Kyung Mi [Dept. of Radiology, Kyung Hee University College of Medicine, Kyung Hee University Hospital, Seoul (Korea, Republic of)
2017-08-01
To evaluate the diagnostic benefits of 5-mm maximum intensity projection of improved motion-sensitized driven-equilibrium prepared contrast-enhanced 3D T1-weighted turbo-spin echo imaging (MIP iMSDE-TSE) in the detection of brain metastases. The imaging technique was compared with 1-mm images of iMSDE-TSE (non-MIP iMSDE-TSE), 1-mm contrast-enhanced 3D T1-weighted gradient-echo imaging (non-MIP 3D-GRE), and 5-mm MIP 3D-GRE. From October 2014 to July 2015, 30 patients with 460 enhancing brain metastases (size > 3 mm, n = 150; size ≤ 3 mm, n = 310) were scanned with non-MIP iMSDE-TSE and non-MIP 3D-GRE. We then performed 5-mm MIP reconstruction of these images. Two independent neuroradiologists reviewed these four sequences. Their diagnostic performance was compared using the following parameters: sensitivity, reading time, and figure of merit (FOM) derived by jackknife alternative free-response receiver operating characteristic analysis. Interobserver agreement was also tested. The mean FOM (all lesions, 0.984; lesions ≤ 3 mm, 0.980) and sensitivity ([reader 1: all lesions, 97.3%; lesions ≤ 3 mm, 96.2%], [reader 2: all lesions, 97.0%; lesions ≤ 3 mm, 95.8%]) of MIP iMSDE-TSE were comparable to the mean FOM (0.985, 0.977) and sensitivity ([reader 1: 96.7, 99.0%], [reader 2: 97, 95.3%]) of non-MIP iMSDE-TSE, but they were superior to those of non-MIP and MIP 3D-GREs (all, p < 0.001). The reading time of MIP iMSDE-TSE (reader 1: 47.7 ± 35.9 seconds; reader 2: 44.7 ± 23.6 seconds) was significantly shorter than that of non-MIP iMSDE-TSE (reader 1: 78.8 ± 43.7 seconds, p = 0.01; reader 2: 82.9 ± 39.9 seconds, p < 0.001). Interobserver agreement was excellent (κ > 0.75) for all lesions in both sequences. MIP iMSDE-TSE showed high detectability of brain metastases. Its detectability was comparable to that of non-MIP iMSDE-TSE, but it was superior to the detectability of non-MIP/MIP 3D-GREs. With a shorter reading time, the false-positive results of MIP i
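The core MIP operation is simple to state in code: a 5-mm maximum intensity projection collapses a slab of thin slices into a single image by taking, for each pixel, the maximum along the slice axis (here, five 1-mm slices; the volume is synthetic, with one artificially bright voxel standing in for an enhancing lesion).

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.normal(100, 10, size=(5, 64, 64))   # 5 slices of 1 mm each
volume[2, 30, 40] = 400.0                        # synthetic bright "lesion"

mip = volume.max(axis=0)                         # 5-mm slab -> one MIP image
print(mip.shape, mip[30, 40])
```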
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
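A simplified sketch of a clade-maximum-rate-style computation (the paper's actual metric and data are richer): given (generations elapsed, body mass) observations within a clade, take the maximum rate of change of log-mass per generation over all pairs. The numbers below are illustrative, echoing the 100-fold/1.6-million-generation figure, not the paper's dataset.

```python
import math

def clade_maximum_rate(observations):
    """observations: list of (generations_elapsed, mass) tuples.
    Returns the maximum |delta log10(mass)| per generation over all pairs."""
    best = 0.0
    for i in range(len(observations)):
        for j in range(i + 1, len(observations)):
            g1, m1 = observations[i]
            g2, m2 = observations[j]
            if g1 != g2:
                rate = abs(math.log10(m2) - math.log10(m1)) / abs(g2 - g1)
                best = max(best, rate)
    return best

# 100-fold increase over 1.6 million generations (cf. the terrestrial estimate)
obs = [(0, 1.0), (1.6e6, 100.0)]
print(clade_maximum_rate(obs))   # 2 log10-units / 1.6e6 generations = 1.25e-6
```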
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Rosec, Jean-Philippe; Causse, Véronique; Cruz, Barbara; Rauzier, Jean; Carnat, Laurence
2012-07-02
During two surveys conducted in 2008 and 2009, the culture method described in the international standard ISO/TS 21872-1 was applied to the detection of Vibrio parahaemolyticus and Vibrio cholerae in 112 living bivalve mollusc samples, with a chromogenic medium used in addition to the TCBS agar, as a second selective isolation medium and for enumeration of V. parahaemolyticus and V. cholerae by surface inoculation. A PCR method for detection of these two Vibrio species and the hemolysin genes tdh and trh was applied in parallel. In 2009, the survey was extended to finfish fillets and crustaceans. PCR was also used for species confirmation of characteristic colonies. The identity of the PCR products, specifically targeting V. parahaemolyticus, was checked by sequencing. Occurrence of V. parahaemolyticus and V. cholerae isolates in living bivalve molluscs ranged from 30.4% to 32.6% and from 1.4% to 4.7%, respectively. In frozen crustaceans (2009 survey), V. parahaemolyticus and V. cholerae isolates were found in 45% and 10% of the samples, respectively. No V. parahaemolyticus or V. cholerae was detected in frozen fish fillets, neither by the ISO method nor by PCR. In 2009, enteropathogenic V. parahaemolyticus (trh+) was isolated from 4 out of 43 oyster samples, while the trh gene was present in V. alginolyticus strains and in samples where V. parahaemolyticus was not detected (9 of 112 samples). The ISO method failed to isolate V. parahaemolyticus in 44% to 53% of the living bivalve molluscs where PCR detected the toxR gene specific to V. parahaemolyticus (Vp-toxR). Our results highlighted the need for a revision of the ISO/TS 21872-1 standard, at least for analysis of living bivalve molluscs, and confirmed the increasing concern of enteropathogenic V. parahaemolyticus in French bivalve molluscs. Enrichment at 41.5°C was questioned and some reliable solutions for the improvement of the ISO/TS 21872-1 method, such as the PCR method for screening of positive samples and
When Yawning Occurs in Elephants
Rossman, Zoë T.; Hart, Benjamin L.; Greco, Brian J.; Young, Debbie; Padfield, Clare; Weidner, Lisa; Gates, Jennifer; Hart, Lynette A.
2017-01-01
Yawning is a widely recognized behavior in mammalian species. One would expect that elephants yawn, although to our knowledge, no one has reported observations of yawning in any species of elephant. After confirming a behavioral pattern matching the criteria of yawning in two Asian elephants (Elephas maximus) in a zoological setting, this study was pursued with nine captive African elephants (Loxodonta africana) at a private reserve in the Western Cape, South Africa, the Knysna Elephant Park. Observations were made in June–September and in December. In the daytime, handlers managed seven of the elephants for guided interactions with visitors. At night, all elephants were maintained in a large enclosure with six having limited outdoor access. With infrared illumination, the elephants were continuously recorded by video cameras. During the nights, the elephants typically had 1–3 recumbent sleeping/resting bouts, each lasting 1–2 h. Yawning was a regular occurrence upon arousal from a recumbency, especially in the final recumbency of the night. Yawning was significantly more frequent in some elephants. Yawning was rare during the daytime and during periods of standing around in the enclosure at night. In six occurrences of likely contagious yawning, one elephant yawned upon seeing another elephant yawning upon arousal from a final recumbency; we recorded the sex and age category of the participants. The generality of yawning in both African and Asian elephants in other environments was documented in video recordings from 39 zoological facilities. In summary, the study provides evidence that yawning does occur in both African and Asian elephants, and in African elephants, yawning was particularly associated with arousal from nighttime recumbencies. PMID:28293560
党克; 陆雯雯; 严干贵
2016-01-01
In order to track the maximum power point of a photovoltaic array rapidly and effectively, a sliding-mode control method based on an exponential reaching law is designed. The switching hyperplane of the system is designed according to the characteristics of the array's maximum power point, and a sliding-mode controller drives the system state from outside the hyperplane onto it. To weaken chattering while the system converges quickly to the sliding surface, a critical value (boundary layer) is set near the switching hyperplane, and the reaching-law parameters are chosen according to the definition of the exponential reaching law. Simulation experiments show that this method tracks the system's maximum power point quickly and accurately and effectively weakens chattering as the sliding-mode control approaches the sliding surface.
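A minimal sketch of the reaching-law dynamics described (all gains illustrative, not the paper's tuned values): the exponential reaching law s' = -ε·sgn(s) - k·s, with the sign function softened to a saturation inside a small boundary layer φ (the "critical value" near the switching hyperplane) to suppress chattering.

```python
import numpy as np

def sat(s, phi):
    """Saturated sign: linear inside the boundary layer, +/-1 outside."""
    return np.clip(s / phi, -1.0, 1.0)

eps, k, phi, dt = 0.5, 2.0, 0.05, 1e-3    # illustrative gains and step size
s = 1.0                      # initial distance from the switching hyperplane
history = [s]
for _ in range(5000):
    s += dt * (-eps * sat(s, phi) - k * s)   # exponential reaching law
    history.append(s)

print(history[0], abs(history[-1]))
```

Outside the boundary layer the constant term -ε dominates and drives the state toward the surface quickly; inside it, the dynamics become smooth linear decay instead of switching, which is what removes the chattering.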
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
王雪丽; 陶剑; 史宁中
2005-01-01
The primary goal of a phase I clinical trial is to find the maximum tolerable dose of a treatment. In this paper, we propose a new stepwise method based on confidence bounds and information incorporation to determine the maximum tolerable dose among given dose levels. On the one hand, to avoid severe or even fatal toxicity and to reduce the number of experimental subjects, the new method starts from the lowest dose level and then proceeds in a stepwise fashion. On the other hand, to improve the accuracy of the recommendation, the final recommendation of the maximum tolerable dose incorporates the information of an additional experimental cohort at the same dose level. Furthermore, empirical simulation results show that the new method has real advantages in comparison with the modified continual reassessment method.
Forecasting ozone daily maximum levels at Santiago, Chile
Jorquera, Héctor; Pérez, Ricardo; Cipriano, Aldo; Espejo, Andrés; Victoria Letelier, M.; Acuña, Gonzalo
In major urban areas, the impact of air pollution on health is serious enough to include it in the group of meteorological variables that are forecast daily. This work focuses on the comparison of different forecasting systems for daily maximum ozone levels at Santiago, Chile. The modelling tools used for these systems were linear time series, artificial neural networks and fuzzy models. The structure of the forecasting model was derived from basic principles and includes a combination of persistence and daily maximum air temperature as input variables. Assessment of the models is based on two indices: their ability to correctly forecast an episode, and their tendency to forecast an episode that in the end did not occur (a false positive). All the models tried in this work showed good forecasting performance, with 70-95% of successful forecasts at two monitoring sites: Downtown (moderate impacts) and Eastern (downwind, highest impacts). The number of false positives was not negligible, but this may be improved by expressing the forecast in broad classes: low, average, high, and very high impacts; the fuzzy model was the most reliable forecast, with the lowest number of false positives among the different models evaluated. The quality of the results and the dynamics of ozone formation suggest the use of a forecast to warn people about excessive exposure during episode days at Santiago.
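A toy sketch of the linear variant of the forecast structure described (persistence plus next-day maximum temperature as inputs), fitted by least squares on synthetic data; the coefficients, data-generating model, and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
tmax = rng.uniform(15, 35, size=n)               # daily max temperature (C)
ozone = 2.5 * tmax + rng.normal(0, 5, size=n)    # synthetic daily max ozone
# design matrix: persistence term, next-day temperature term, intercept
A = np.column_stack([ozone[:-1], tmax[1:], np.ones(n - 1)])
coef, *_ = np.linalg.lstsq(A, ozone[1:], rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - ozone[1:]) ** 2))
print(coef, rmse)
```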
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
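As a concrete illustration of one member of this maximum-entropy family (parameters are arbitrary, and this is a generic simulation, not the paper's derivation), the Ornstein-Uhlenbeck process can be simulated by Euler-Maruyama; the fluctuation-dissipation relation fixes its stationary variance at σ²/(2θ), which the ensemble variance should approach.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, sigma = 1.0, 0.5          # mean-reversion rate, noise amplitude
dt, n_steps, n_paths = 1e-2, 2000, 500
x = np.zeros(n_paths)            # ensemble of OU paths started at 0
for _ in range(n_steps):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

stationary_var = sigma**2 / (2 * theta)          # fluctuation-dissipation
print(x.var(), stationary_var)
```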
刘军; 王得发; 薛蓉
2016-01-01
Maximum power point tracking (MPPT) is widely used in photovoltaic power generation systems, but in practical application it has shortcomings such as slow tracking and oscillation around the maximum power point. Given these problems, and based on an analysis of the advantages and shortcomings of the perturb-and-observe method together with the principle of the hysteresis comparison method, this paper proposes a combined MPPT control method that merges the tracking strength of perturb-and-observe with the hysteresis principle, optimizing the system's control strategy. Simulation comparisons with the traditional perturb-and-observe method show that the improved method tracks the maximum power point quickly and effectively reduces oscillation at the photovoltaic cell's maximum power point when insolation and temperature change, verifying the correctness and effectiveness of the method.
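A sketch of the basic perturb-and-observe loop the paper builds on (the PV curve, step size, and starting voltage below are toy stand-ins): perturb the operating voltage, observe the power change, and keep stepping in the direction that increases power; the residual oscillation around the peak is exactly what the hysteresis refinement targets.

```python
def pv_power(v):
    """Toy PV power-voltage curve with a single maximum near v = 17."""
    return max(0.0, 100.0 - (v - 17.0) ** 2)

def perturb_and_observe(v0, step=0.2, iters=200):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction   # power fell: reverse the perturbation
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe(v0=10.0)
print(v_mpp, p_mpp)   # settles into a small oscillation around v = 17
```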
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for 3-regular graphs, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is also presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given compact metric space. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
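A back-of-the-envelope sketch of why such tests are nearly futile (a generic Gutenberg-Richter/Poisson calculation with invented rates, not the paper's extreme-value analysis): the probability of observing an event above magnitude m in a window of T years is 1 - exp(-rate(m)·T), and rates near a candidate maximum magnitude are so low that any practical test window sees essentially nothing.

```python
import math

def exceedance_probability(m, rate_m4=1.0, b=1.0, years=30.0):
    """rate_m4: assumed annual rate of events with magnitude >= 4 in the zone."""
    rate = rate_m4 * 10 ** (-b * (m - 4.0))     # Gutenberg-Richter scaling
    return 1.0 - math.exp(-rate * years)        # Poisson occurrence in T years

for m in (5.0, 7.0, 9.0):
    print(m, exceedance_probability(m))
```

With these illustrative numbers, a magnitude-5 exceedance is near-certain over 30 years while a magnitude-9 exceedance has probability well under 0.1%, so a 30-year test window cannot discriminate between competing near-maximum estimates.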
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, whereas MVMED optimizes a single relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step solves the optimization problem without the constraint of equal margin posteriors from the two views; the second step then imposes the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique and maximum independent set problems of a graph are classical problems in graph theory. Combining Boolean algebra and integer programming, this paper designs two integer programming models for the maximum clique problem that improve on earlier results. A programming model for the maximum independent set then follows as a corollary of the main results. The two models can be easily implemented in computer algorithms and software, and are suitable for graphs of any scale. Finally, the models are presented as Lingo programs, and are verified and compared on several examples.
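The paper's integer programming models are not reproduced in this record. As a rough illustration of the underlying problem (not the paper's formulation), the standard 0-1 view maximizes the number of chosen vertices subject to x_i + x_j ≤ 1 for every non-edge {i, j}; the brute-force search below, with a made-up example graph, solves the same problem directly for tiny inputs:

```python
from itertools import combinations

def is_clique(adj, nodes):
    """Check that every pair of the chosen vertices is joined by an edge."""
    return all(v in adj[u] for u, v in combinations(nodes, 2))

def max_clique(adj):
    """Brute-force maximum clique; exponential, fine only for toy graphs.
    `adj` maps each vertex to the set of its neighbours."""
    vertices = list(adj)
    for size in range(len(vertices), 0, -1):  # try the largest size first
        for cand in combinations(vertices, size):
            if is_clique(adj, cand):
                return set(cand)
    return set()

# Hypothetical example: triangle {0,1,2} plus a pendant path 2-3-4.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(max_clique(adj))  # the unique maximum clique {0, 1, 2}
```

An ILP solver replaces this enumeration in practice; the abstract's claim is precisely that a well-chosen model makes such solvers effective at scale.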
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti in $Cat(n;t)$ with maximum Kirchhoff index are characterized, as well...
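The Kirchhoff index defined above can be computed directly for a small graph by summing effective resistances, each obtained from a grounded-Laplacian solve. A minimal pure-Python sketch (the function names and test graphs are illustrative, not from the paper):

```python
def kirchhoff_index(n, edges):
    """Sum of effective resistances over all vertex pairs of a connected
    graph with unit-resistance edges."""
    # Build the graph Laplacian.
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0; L[v][v] += 1.0
        L[u][v] -= 1.0; L[v][u] -= 1.0

    def solve(A, b):
        """Gauss-Jordan elimination with partial pivoting."""
        m = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(m):
            p = max(range(c, m), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(m):
                if r != c and M[r][c]:
                    f = M[r][c] / M[c][c]
                    M[r] = [x - f * y for x, y in zip(M[r], M[c])]
        return [M[i][m] / M[i][i] for i in range(m)]

    kf = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # Ground vertex j (delete its row/column), inject unit current at i.
            keep = [k for k in range(n) if k != j]
            A = [[L[r][c] for c in keep] for r in keep]
            b = [1.0 if k == i else 0.0 for k in keep]
            x = solve(A, b)
            kf += x[keep.index(i)]  # potential at i = effective resistance
    return kf

# Triangle C3: every pair has effective resistance 1 || 2 = 2/3, so Kf = 2.
print(round(kirchhoff_index(3, [(0, 1), (1, 2), (0, 2)]), 6))  # 2.0
```

Extremal questions like the one in this paper ask which graph in a family maximizes this quantity; the code above only evaluates it.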
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Needs to Update Probable Maximum Precipitation for Critical Infrastructure
Pathak, C. S.; England, J. F.
2015-12-01
Probable Maximum Precipitation (PMP) is theoretically the greatest depth of precipitation for a given duration that is physically possible over a given size storm area at a particular geographical location at a certain time of the year. It is used to develop inflow flood hydrographs, known as the Probable Maximum Flood (PMF), as a design standard for high-risk flood-hazard structures, such as dams and nuclear power plants. PMP estimation methodology was developed in the 1930s and 40s, when many dams were constructed in the US. The procedures to estimate PMP were later standardized by the World Meteorological Organization (WMO) in 1973 and revised in 1986. In the US, PMP estimates have been published in a series of Hydrometeorological Reports (e.g., HMR55A, HMR57, and HMR58/59) by the National Weather Service since the 1950s. In these reports, storm data up to the 1980s were used to establish the current PMP estimates. Since that time, we have acquired an additional 30 to 40 years of meteorological data, including newly available radar- and satellite-based precipitation data. These data sets are expected to have improved quality and availability in both time and space. In addition, a significant number of extreme storms have occurred, and some of these events came close to or even exceeded the current PMP estimates. In the last 50 years, climate science has progressed and scientists have a better understanding of the atmospheric physics of extreme storms. However, applied research on the estimation of PMP has been lagging behind. Alternative methods, such as atmospheric numerical modeling, should be investigated for estimating PMP and the associated uncertainties. It would be highly desirable if regional atmospheric numerical models could be utilized in the estimation of PMP and its uncertainties, in addition to the methods originally used to develop the PMP index maps in the existing hydrometeorological reports.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor...... system, with respect to the single motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range....
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p_μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_{μν} and a curvature tensor P^λ_{μνρ}. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Evaluating the time limit at maximum aerobic speed in elite swimmers. Training implications.
Renoux, J C
2001-12-01
The aim of the present study was to make use of the concepts of maximum aerobic speed (MAS) and time limit (tlim) in order to determine the relationship between these two elements, in an attempt to significantly improve both speed and swimming performance during a training season. To this end, an intermittent training model was used, adapted to the value obtained for the time limit at maximum aerobic speed. During a 12 week training period, the maximum aerobic speed for a group of 9 top-ranking varsity swimmers was measured on two occasions, as was the tlim. The values generated indicated that: 1) there was an inverse relationship between MAS and the time this speed could be maintained, thus confirming the studies by Billat et al. (1994b); 2) a significant increase in MAS occurred over the 12 week period, although no such evolution was seen for the tlim; 3) there was an improvement in results; 4) the time limit could be used in designing a training program based on intermittent exercises. In addition, results of the present study should allow swimming coaches to draw up individualized training programs for a given swimmer by taking into consideration maximum aerobic speed, time limit and propelling efficiency.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
An O (n log n) algorithm for the Maximum Agreement Subtree problem for binary trees
Cole, R.; Hariharan, R.
1996-12-31
The Maximum Agreement Subtree problem is the following: given two trees whose leaves are drawn from the same set of items (e.g., species), find the largest subset of these items so that the portions of the two trees restricted to these items are isomorphic. We consider the case which occurs frequently in practice, i.e., the case when the trees are binary, and give an O(n log n) time algorithm for this problem. This improves the previous best bound of O(n log³ n) due to Farach, Przytycka, and Thorup.
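The O(n log n) algorithm itself is intricate; as a minimal illustration of what the problem asks (not of the paper's method), the exponential brute-force sketch below, with a tree encoding of my own choosing, restricts two rooted trees to each leaf subset and tests isomorphism:

```python
from itertools import combinations

def restrict(t, keep):
    """Restrict a rooted tree (nested tuples, strings at leaves) to a leaf
    subset, suppressing internal nodes left with a single child."""
    if isinstance(t, str):
        return t if t in keep else None
    kids = [r for r in (restrict(c, keep) for c in t) if r is not None]
    if not kids:
        return None
    return kids[0] if len(kids) == 1 else tuple(kids)

def canon(t):
    """Canonical form: recursively sort children (children are unordered)."""
    if isinstance(t, str):
        return t
    return tuple(sorted((canon(c) for c in t), key=str))

def leaves(t):
    return {t} if isinstance(t, str) else set().union(*map(leaves, t))

def mast_size(t1, t2):
    """Size of a maximum agreement subtree of two rooted leaf-labelled trees."""
    common = sorted(leaves(t1) & leaves(t2))
    for size in range(len(common), 0, -1):  # largest subset first
        for sub in combinations(common, size):
            s = set(sub)
            if canon(restrict(t1, s)) == canon(restrict(t2, s)):
                return size
    return 0

t1 = (("a", "b"), ("c", "d"))
t2 = (("a", "c"), ("b", "d"))
print(mast_size(t1, t2))  # 2: the two trees disagree on every triple of leaves
```

The fast algorithms in this record and the FPT results in the header achieve the same answer without enumerating subsets.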
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
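In its simplest form, the background-only versus background-plus-source comparison described above is a Poisson likelihood-ratio test. The toy sketch below (a single region with a known background rate, far simpler than the actual Sherpa MLE tool, and with illustrative names of my own) shows the idea:

```python
from math import log

def poisson_loglike(n, mu):
    """Poisson log-likelihood, up to the n-only log(n!) term that cancels
    in likelihood ratios."""
    return n * log(mu) - mu if mu > 0 else float("-inf")

def detection_statistic(n, b):
    """2 * log-likelihood ratio between background-plus-source and
    background-only hypotheses, for n observed counts and expected
    background b. The source amplitude is profiled out: its MLE is
    max(0, n - b)."""
    s_hat = max(0.0, n - b)
    l0 = poisson_loglike(n, b)           # background only
    l1 = poisson_loglike(n, b + s_hat)   # background + best-fit source
    return 2.0 * (l1 - l0)

print(round(detection_statistic(20, 5.0), 2))  # strong excess over background
print(detection_statistic(5, 5.0))             # no excess: statistic is 0.0
```

The real pipeline additionally fits spatial models (PSF-convolved source plus background) across stacked observations, but the decision variable retains this likelihood-ratio structure.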
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250–370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
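The spectral luminous efficacy discussed above is the photopically weighted fraction of radiant power, scaled by 683 lm/W. A rough numerical sketch for a blackbody spectrum, using a common Gaussian fit to the photopic sensitivity curve V(λ) rather than the CIE tabulation the paper would use (all choices here are approximations for illustration):

```python
from math import exp, expm1

def planck(lam_um, T):
    """Blackbody spectral radiance vs wavelength, arbitrary units.
    lam_um is the wavelength in micrometres, T in kelvin."""
    lam = lam_um * 1e-6
    c2 = 0.014388  # second radiation constant, m*K
    return lam ** -5 / expm1(c2 / (lam * T))

def photopic(lam_um):
    """Gaussian approximation to the CIE photopic curve V(lambda),
    peaking near 555 nm (a standard rough fit, not the exact tabulation)."""
    return 1.019 * exp(-285.4 * (lam_um - 0.559) ** 2)

def luminous_efficacy(T, lo=0.1, hi=20.0, n=20000):
    """Luminous efficacy of blackbody radiation at temperature T, in lm/W:
    683 * integral(V * B) / integral(B), via the trapezoid rule."""
    h = (hi - lo) / n
    num = den = 0.0
    for i in range(n + 1):
        lam = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        b = planck(lam, T)
        num += w * photopic(lam) * b
        den += w * b
    return 683.0 * num / den

print(round(luminous_efficacy(5800), 1))  # sunlight-like spectrum: ~90 lm/W
```

The 250-370 lm/W figures in the abstract come from truncating the spectrum to an acceptable white bandpass; the untruncated blackbody value computed here is far lower because so much power lands outside the visible band.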
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a high level of noise. The method is developed in the context of a concrete example: the estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
The maximum agreement subtree problem
Martin, Daniel M
2012-01-01
Given two binary phylogenetic trees on $n$ leaves, we show that they have a common subtree on at least $O((\log{n})^{1/2-\epsilon})$ leaves, thus improving on the previously known bound of $O(\log\log n)$. To achieve this bound, we combine different special cases: when one of the trees is balanced or when one of the trees is a caterpillar, we show a lower bound of $O(\log n)$. Another ingredient is the proof that every binary tree contains a large balanced subtree or a large caterpillar, a result that is interesting in its own right. Finally, we also show that there is an $\alpha > 0$ such that when both trees are balanced, they have a common subtree on at least $O(n^\alpha)$ leaves.
孙俊
2009-01-01
In order to improve the general applicability and real-time performance of image threshold segmentation algorithms, a 2D maximum between-cluster variance segmentation algorithm based on a two-level genetic algorithm was put forward. Building on the 2D maximum between-cluster variance algorithm, the influence of the neighborhood template size on the optimal threshold was studied, and not only the gray level of each pixel and its spatial correlation information within the neighborhood, but also the size of the neighborhood, were encoded as genetic factors. A genetic algorithm was used to locate a small range containing the optimal threshold, and a second genetic algorithm run within this range then found the global optimum. The improved algorithm was applied to target recognition experiments in a cucumber computer vision system. The experimental results showed that the time spent computing between-cluster variances by the two-level genetic algorithm was only 0.18% of that of the conventional 2D maximum between-cluster variance algorithm and 46.87% of that of the 1D Otsu algorithm, so the overall running time was greatly reduced compared with both, and the segmentation quality was also clearly improved. The algorithm thus provides a new real-time image segmentation method for the target recognition field and has practical value for wider adoption.
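The 1D Otsu criterion that the 2D method above generalizes can be sketched in a few lines. This is the classical single-threshold algorithm only, on made-up toy data, not the paper's two-level genetic 2D variant:

```python
def otsu_threshold(pixels, levels=256):
    """Classical 1D Otsu: pick the threshold t maximizing the between-class
    variance w0*w1*(mu0 - mu1)**2 of the classes {p < t} and {p >= t}."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(1, levels):
        w0 += hist[t - 1]           # pixels below the threshold
        sum0 += (t - 1) * hist[t - 1]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Clearly bimodal toy image: dark pixels near 10, bright pixels near 200.
pixels = [8, 10, 12, 10, 9, 198, 200, 202, 199, 201]
print(otsu_threshold(pixels))  # a threshold separating the two modes
```

The 2D variant scores (gray level, neighborhood mean) pairs instead of single gray levels, which is why its exhaustive search is so much more expensive and why the paper resorts to a genetic search.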
Barrows, Timothy T.; Juggins, Steve
2005-04-01
We present new last glacial maximum (LGM) sea-surface temperature (SST) maps for the oceans around Australia based on planktonic foraminifera assemblages. To provide the most reliable SST estimates we use the modern analog technique, the revised analog method, and artificial neural networks in conjunction with an expanded modern core top database. All three methods produce similar quality predictions and the root mean squared error of the consensus prediction (the average of the three) under cross-validation is only ±0.77 °C. We determine LGM SST using data from 165 cores, most of which have good age control from oxygen isotope stratigraphy and radiocarbon dates. The coldest SST occurred at 20,500±1400 cal yr BP, predating the maximum in oxygen isotope records at 18,200±1500 cal yr BP. During the LGM interval we observe cooling within the tropics of up to 4 °C in the eastern Indian Ocean, and mostly between 0 and 3 °C elsewhere along the equator. The high latitudes cooled by the greatest degree, a maximum of 7-9 °C in the southwest Pacific Ocean. Our maps improve substantially on previous attempts by making higher quality temperature estimates, using more cores, and improving age control.
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following range: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
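The Toeplitz/Levinson step described above is the standard Levinson-Durbin recursion for the prediction-error filter. A minimal generic sketch (the recursion only, not the authors' full receiver-function pipeline):

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: given autocorrelations r[0..order], return
    (prediction-error filter a, final error power e), with a[0] = 1.
    For a valid autocorrelation sequence the reflection (PARCOR) coefficient
    at each step stays below 1 in magnitude, which keeps the recursion
    stable, the property the abstract relies on."""
    a = [1.0]
    e = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this order.
        k = -sum(a[j] * r[i - j] for j in range(i)) / e
        # Order-update of the filter coefficients: a'[j] = a[j] + k*a[i-j].
        a = [a[j] + k * (a[i - j] if i - j < len(a) else 0.0)
             for j in range(i)] + [k]
        e *= 1.0 - k * k
    return a, e

# AR(1) autocorrelation r[k] = 0.5**k: the recursion recovers the filter
# a = [1, -0.5, 0, 0] and error power 0.75.
a, e = levinson_durbin([0.5 ** k for k in range(4)], 3)
print([round(c, 6) for c in a], round(e, 6))
```

In the receiver-function application, the autocorrelations come from the observed seismogram components and the resulting filter performs the deconvolution.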
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
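The "differentiate the power equation" step described above can be mimicked numerically. A toy single-diode panel model in Python, where every parameter value and function name is made up for illustration (not taken from this record or from the MPPT paper in the header):

```python
from math import exp, log

# Hypothetical single-diode panel parameters (illustrative only).
I_SC = 5.0    # short-circuit current, A
I_0 = 1e-9    # diode saturation current, A
V_T = 1.5     # lumped thermal voltage for the whole module, V

def current(v):
    """Panel current from the ideal single-diode model."""
    return I_SC - I_0 * (exp(v / V_T) - 1.0)

def power(v):
    return v * current(v)

def maximum_power_point(steps=100000):
    """Scan the V-P curve from 0 to the open-circuit voltage and return
    (V_mpp, P_mpp); equivalent to locating dP/dV = 0 numerically."""
    v_oc = V_T * log(I_SC / I_0 + 1.0)  # voltage where current drops to zero
    best_v = best_p = 0.0
    for i in range(steps + 1):
        v = v_oc * i / steps
        p = power(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

v_mpp, p_mpp = maximum_power_point()
v_oc = V_T * log(I_SC / I_0 + 1.0)
print(round(v_mpp / v_oc, 2))  # for these parameters V_mpp sits at ~0.87 of Voc
```

The MPPT paper in the header exploits exactly this kind of nonlinear V_mpp-to-Voc relationship, rather than the linear fraction-of-Voc approximations it criticizes.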
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao;
2014-01-01
This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction both in the time and short-time Fourier transform (STFT) domains with one single microphone and multiple microphones. In the time domain, we show that the maximum SNR filters can...... significantly increase the SNR but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR....... This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value....
OCCURENCE OF MERCURY IN PET FOOD
M.C. Abete
2013-02-01
Mercury levels in 61 complete pet feeds containing fish were evaluated. In five samples a mercury content exceeding the maximum residue level (0.4 mg/kg) was detected. The statistical evaluation did not show a significant correlation between the percentage of fish in the feedingstuffs and the contamination level.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Information Needs While A Disaster Is Occurring
Perry, S. C.
2010-12-01
Evidence from recent earthquakes, wildfires, and debris flows in southern California indicates that many people - local officials as well as residents and visitors - lack important understanding during the time that a disaster is unfolding, a time of uncertainty and confusion. While some of the uncertainty is inherent, some could be alleviated. Physical scientists and engineers know what to expect as the event unfolds. Social scientists know how humans will react during a disaster, and how to effectively communicate the warnings or evacuation orders that may precede it. Such knowledge can improve public safety. As just a few of many examples: - Based on questions posed at numerous public talks, many individuals who practice "Drop Cover and Hold" during earthquake drills do not understand what they are protecting themselves against, and thus cannot determine what to do when an earthquake strikes and they have no cover available. Similarly, they do not know how to act during the aftershocks that follow. - The 2009 Station Fire in the San Gabriel Mountains put foothills communities at risk, first from the wildfire and then from debris flows. Some neighborhoods received multiple evacuation notices during a few days or months. Local officials have expressed frustration and concern about an evacuation compliance rate that is steadily dropping and is now below 50%. The debris flow danger will persist over the next 2-4 winters yet evacuation compliance may drop lower still. - On February 6, 2010, a significant rainstorm brought the threat of imminent debris flows to areas burned by the Station Fire. In one neighborhood, residents loaded their cars with important belongings then waited for indications that they should evacuate. Powerful debris flows suddenly appeared, sweeping the cars downhill and destroying both cars and belongings. Some residents did understand that rainfall intensity would control the generation of debris flows in that storm. But they didn't understand
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on these sample results, the whole length of CC needed in the design of an SFCL can be determined.
Allergies and Asthma: They Often Occur Together
... you miserable. A lot, as it turns out. Allergies and asthma often occur together. The same substances that trigger your hay fever symptoms, such as pollen, dust mites and pet dander, may also cause asthma signs ...
Multiple Primary Cancers: Simultaneously Occurring Prostate ...
2016-05-20
May 20, 2016 ... occurring prostate cancer and other primary tumors-our experience and literature ... carcinoma, primary liver cell carcinoma, and thyroid follicular carcinoma in both ..... malignancies in women with papillary thyroid cancer.
ST elevation occurring during stress testing
Diana Malouf
2016-04-01
Full Text Available A case is presented of significant reversible ST elevation occurring during treadmill testing, and the coronary anatomy and subsequent course are described, indicating that ischemia is a potential cause of this electrocardiographic finding.
Maximum Likelihood Under Response Biased Sampling
Chambers, Raymond; Dorfman, Alan; Wang, Suojin
2003-01-01
Informative sampling occurs when the probability of inclusion in the sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...
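As a hedged illustration of why size-biased inclusion matters (a textbook special case, not the array-sampling model of the paper itself): if values come from an exponential distribution with mean mu but inclusion probability is proportional to the response, the observed values follow a Gamma(2, mu) law, so the naive sample mean estimates 2*mu and the maximum-likelihood correction is simply half the sample mean. The function name and the choice mu = 5 below are our own.

```python
import random

def size_biased_exponential_mle(sample):
    # Under size-biased sampling from an exponential with mean mu, the
    # observed density is y * exp(-y/mu) / mu**2, i.e. Gamma(shape=2,
    # scale=mu); maximizing that likelihood gives mu_hat = mean(sample) / 2.
    return sum(sample) / (2 * len(sample))

random.seed(42)
mu = 5.0
# Size-biased draws from Exp(mu) are exactly Gamma(2, mu) draws.
sample = [random.gammavariate(2.0, mu) for _ in range(20000)]

naive = sum(sample) / len(sample)                # biased upward, near 2*mu
corrected = size_biased_exponential_mle(sample)  # near the true mu
```

The point of the sketch is only that ignoring informative inclusion roughly doubles the estimated mean, while the one-line likelihood correction removes the bias.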
Study of maximum pressure for composite hepta-tubular powders
M. C. Gupta
1959-10-01
Full Text Available In this paper, expressions for the positions at which maximum pressure occurs in composite hepta-tubular powders used in conventional guns, together with the corresponding conditions, have been derived under certain assumptions, viz., the value of n, the ratio of specific heats, has been assumed to be the same for both charges, and the covolume corrections have not been neglected.
Trichotillomania and Co-occurring Anxiety
Grant, Jon E.; Redden, Sarah A.; Leppink, Eric W.; Chamberlain, Samuel R.
2017-01-01
Background Trichotillomania appears to be a fairly common disorder, with high rates of co-occurring anxiety disorders. Many individuals with trichotillomania also report that pulling worsens during periods of increased anxiety. Even with these clinical links to anxiety, little research has explored whether trichotillomania with co-occurring anxiety is a meaningful subtype. Methods 165 adults with trichotillomania were examined on a variety of clinical measures including symptom severity, functioning, and comorbidity. Participants also underwent cognitive testing assessing motor inhibition and cognitive flexibility. Clinical features and cognitive functioning were compared between those with current co-occurring anxiety disorders (i.e. social anxiety, generalized anxiety disorder, panic disorder, and anxiety disorder NOS) (n=38) and those with no anxiety disorder (n=127). Results Participants with trichotillomania and co-occurring anxiety reported significantly worse hair pulling symptoms, were more likely to have co-occurring depression, and were more likely to have a first-degree relative with obsessive compulsive disorder. Those with anxiety disorders also exhibited significantly worse performance on a task of motor inhibition (the stop-signal task). Conclusions This study suggests that anxiety disorders affect the clinical presentation of hair pulling behavior. Further research is needed to validate our findings and to consider whether treatments should be tailored differently for adults with trichotillomania who have co-occurring anxiety disorders, or more pronounced cognitive impairment. PMID:27668531
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at age 103 years for men and age 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
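The reported plateau lends itself to a quick back-of-the-envelope calculation: under a constant annual death risk q, the expected number of further whole years lived is the sum of the survival probabilities, (1-q)/q, so a 50% plateau implies roughly one further year of life on average. A minimal sketch (the function name and the truncation horizon are our own):

```python
def remaining_life_expectancy(q, horizon=200):
    # Expected number of further whole years lived under a constant
    # annual death probability q (a mortality plateau):
    # sum over k >= 1 of P(survive at least k years) = (1-q)/q.
    expectancy, survival = 0.0, 1.0
    for _ in range(horizon):
        survival *= 1.0 - q   # probability of surviving one more year
        expectancy += survival
    return expectancy
```

With q = 0.5 this gives about one remaining year; a lower plateau of q = 0.2 would give about four.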
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Perceived functions of naturally occurring autobiographical memories
Treebak, L. S.; Henriksen, J. R.; Lundhus, S.
2005-01-01
The main empirical reference on functions of autobiographical memories is still Hyman & Faries (1992), who used the cue-word method and retrospective judgements. We used diaries to sample naturally occurring autobiographical memories and participants' perceived use of these. Results partly replicate...
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normality.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that exist between the traditional continuum regime and the free-molecular regime has proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy that is vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
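The dual idea — iterating on the Lagrange multipliers of the constraints rather than on the image pixels themselves — can be illustrated on a toy problem: the maximum-entropy distribution on a finite support subject to a mean constraint has the form p_i proportional to exp(lam * x_i), and the single dual variable lam can be found by bisection. This is a sketch of the general principle under our own naming, not the restoration algorithm of the paper:

```python
import math

def maxent_given_mean(xs, target_mean, lo=-10.0, hi=10.0, iters=100):
    # Maximum-entropy probabilities on support xs with a fixed mean.
    # The dual problem has a single variable lam, with p_i proportional
    # to exp(lam * x_i); the constrained mean is increasing in lam, so
    # bisection on lam solves it.
    def mean(lam):
        w = [math.exp(lam * x) for x in xs]
        return sum(x * wi for x, wi in zip(xs, w)) / sum(w)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]
```

For a die with faces 1..6 constrained to mean 4.5 this reproduces Jaynes' classic tilted distribution; with mean 3.5 it returns the uniform distribution, since the constraint is then inactive (lam = 0).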
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system and the connection conditions. It can be applied to the statistical description of both closed and open systems. Examples are considered in which MENT is used to describe equilibrium states, nonequilibrium states and states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Full Text Available Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was considered as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with the osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female, and 453.35 mm and 420.44 mm for left male and female respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm were definitely female; while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
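The demarking-point rule described above is a simple threshold classifier and can be sketched directly. The side-specific cut-offs are the ones reported for this sample (lengths in mm); the function and dictionary names are our own:

```python
# Cut-offs per side: (lower, upper) — definitely female below the lower
# value, definitely male above the upper value, indeterminate between.
DEMARKING_POINTS_MM = {
    "right": (379.99, 476.70),
    "left": (385.73, 484.49),
}

def classify_femur(side, max_length_mm):
    # Demarking-point sex classification from maximum femoral length.
    female_cut, male_cut = DEMARKING_POINTS_MM[side]
    if max_length_mm > male_cut:
        return "male"
    if max_length_mm < female_cut:
        return "female"
    return "indeterminate"  # most bones fall between the cut-offs
```

As the abstract's identification rates show, only a small fraction of bones fall outside the overlap zone, which is exactly why the rule returns "indeterminate" for the rest.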
Onset of effects of testosterone treatment and time span until maximum effects are achieved
Saad, Farid; Aversa, Antonio; Isidori, Andrea M; Zafalon, Livia; Zitzmann, Michael; Gooren, Louis
2011-01-01
Objective Testosterone has a spectrum of effects on the male organism. This review attempts to determine, from published studies, the time-course of the effects induced by testosterone replacement therapy from their first manifestation until maximum effects are attained. Design Literature data on testosterone replacement. Results Effects on sexual interest appear after 3 weeks plateauing at 6 weeks, with no further increments expected beyond. Changes in erections/ejaculations may require up to 6 months. Effects on quality of life manifest within 3–4 weeks, but maximum benefits take longer. Effects on depressive mood become detectable after 3–6 weeks with a maximum after 18–30 weeks. Effects on erythropoiesis are evident at 3 months, peaking at 9–12 months. Prostate-specific antigen and volume rise, marginally, plateauing at 12 months; further increase should be related to aging rather than therapy. Effects on lipids appear after 4 weeks, maximal after 6–12 months. Insulin sensitivity may improve within few days, but effects on glycemic control become evident only after 3–12 months. Changes in fat mass, lean body mass, and muscle strength occur within 12–16 weeks, stabilize at 6–12 months, but can marginally continue over years. Effects on inflammation occur within 3–12 weeks. Effects on bone are detectable already after 6 months while continuing at least for 3 years. Conclusion The time-course of the spectrum of effects of testosterone shows considerable variation, probably related to pharmacodynamics of the testosterone preparation. Genomic and non-genomic effects, androgen receptor polymorphism and intracellular steroid metabolism further contribute to such diversity. PMID:21753068
Ethical issues occurring within nursing education.
Fowler, Marsha D; Davis, Anne J
2013-03-01
The large body of literature labeled "ethics in nursing education" is entirely devoted to curricular matters of ethics education in nursing schools, that is, to what ought to be the ethics content that is taught and what theory or issues ought to be included in all nursing curricula. Where the nursing literature actually focuses on particular ethical issues, it addresses only single topics. Absent from the literature, however, is any systematic analysis and explication of ethical issues or dilemmas that occur within the context of nursing education. The objective of this article is to identify the spectrum of ethical issues in nursing education to the end of prompting a systematic and thorough study of such issues, and to lay the groundwork for research by identifying and provisionally typologizing the ethical issues that occur within the context of academic nursing.
Molten Metal Explosions are Still Occurring
2009-02-01
pans which can give rise to Force 2 incidents if unheated or contain foreign matter. Handling hot dross represents a particular hazard. Force 3 explosions have occurred from hot dross transfer, cooling and dumping into storage areas. In one recent incident, an employee reportedly dumped a load of thermiting dross into a water puddle and was fatally burned. Casting: Incidents continue to be reported for DC casting arising from bleed-outs
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein; Ramin Yazdani; Rick Moore; Michelle Byars; Jeff Kieffer; Professor Morton Barlaz; Rinav Mehta
2000-02-26
Controlled landfilling is an approach to managing solid waste landfills so as to rapidly complete methane generation, while maximizing gas capture and minimizing the usual emissions of methane to the atmosphere. With controlled landfilling, methane generation is accelerated to more rapid and earlier completion of its full potential by improving conditions (principally moisture, but also temperature) to optimize biological processes occurring within the landfill. Gas is contained through use of a surface membrane cover. Gas is captured via porous layers under the cover, operated at slight vacuum. A field demonstration project has been ongoing under NETL sponsorship for the past several years near Davis, CA. Results have been extremely encouraging. Two major benefits of the technology are the reduction of landfill methane emissions to minuscule levels, and the recovery of greater amounts of landfill methane energy in much shorter times, more predictably, than with conventional landfill practice. With the large amount of US landfill methane generated, and the greenhouse potency of methane, better landfill methane control can play a substantial role both in reduction of US greenhouse gas emissions and in US renewable energy. The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells of a size (8000 metric tons [tonnes] each) sufficient to replicate many heat and compaction characteristics of larger "full-scale" landfills. An enhanced demonstration cell has received moisture supplementation to field capacity, the maximum moisture waste can hold while still limiting the liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control landfill module receiving no moisture additions. Gas recovery has continued for a period of over 4 years. It is quite encouraging that the enhanced cell methane recovery has been close to 10-fold that experienced with the control cell.
Incorporating Linguistic Structure into Maximum Entropy Language Models
FANG GaoLin(方高林); GAO Wen(高文); WANG ZhaoQi(王兆其)
2003-01-01
In statistical language models, how to integrate diverse linguistic knowledge in a general framework for long-distance dependencies is a challenging issue. In this paper, an improved language model incorporating linguistic structure into a maximum entropy framework is presented. The proposed model combines trigrams with the structure knowledge of base phrases, in which trigrams capture the local relations between words, while the structure knowledge of base phrases represents the long-distance relations between syntactical structures. The knowledge of syntax, semantics and vocabulary is integrated into the maximum entropy framework. Experimental results show that the proposed model reduces language model perplexity by 24% and increases the sign language recognition rate by about 3% compared with the trigram model.
Naturally occurring radionuclides and Earth sciences
G. Ferrara
1997-06-01
Full Text Available Naturally occurring radionuclides are used in Earth sciences for two fundamental purposes: age determination of rocks and minerals and studies of variation of the isotopic composition of radiogenic nuclides. The methodologies that are in use today allow us to determine ages spanning from the Earth's age to the late Quaternary. The variations of isotopic composition of radiogenic nuclides can be applied to problems of mantle evolution, magma genesis and characterization with respect to different geodynamic situations and can provide valuable information not obtainable by elemental geochemistry.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
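The consistency test that underlies such supertree algorithms is, for rooted triplets, the classic BUILD procedure of Aho et al.: recursively partition the label set so that the two "close" leaves of every triplet ab|c stay in the same block; if the labels ever refuse to split into two or more blocks, the triplets are inconsistent. A compact sketch (the tree is returned as nested tuples; this is the standard textbook procedure, not the authors' exact algorithm):

```python
def build(triplets, labels):
    # Aho et al.'s BUILD: return a rooted tree (nested tuples) consistent
    # with every triplet (a, b, c) meaning ab|c (a and b are closer to
    # each other than either is to c), or None if no such tree exists.
    labels = set(labels)
    if len(labels) <= 2:
        return tuple(labels)
    # Union-find over labels: merge a and b for each relevant triplet ab|c.
    parent = {x: x for x in labels}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b, c in triplets:
        if a in labels and b in labels and c in labels:
            parent[find(a)] = find(b)
    blocks = {}
    for x in labels:
        blocks.setdefault(find(x), set()).add(x)
    if len(blocks) < 2:
        return None  # conflict: the triplets force all labels together
    children = []
    for block in blocks.values():
        sub = [t for t in triplets if set(t) <= block]
        subtree = build(sub, block)
        if subtree is None:
            return None
        children.append(subtree)
    return tuple(children)
```

Running it on the mutually contradictory triplets ab|c, ac|b and bc|a returns None, while any consistent input yields a witness tree.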
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
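The stated bound — maximum seismic moment equal to the injected volume times the modulus of rigidity — converts directly into a maximum moment magnitude via the standard Hanks-Kanamori relation Mw = (log10 M0 - 9.1) / 1.5, with M0 in N·m. A hedged sketch; the default rigidity of 30 GPa is a typical crustal value we assume, not a figure quoted from the paper:

```python
import math

def max_induced_moment_magnitude(injected_volume_m3, rigidity_pa=30e9):
    # Upper bound on seismic moment (N*m): M0_max = G * dV,
    # then moment magnitude: Mw = (log10(M0_max) - 9.1) / 1.5.
    m0_max = rigidity_pa * injected_volume_m3
    return (math.log10(m0_max) - 9.1) / 1.5
```

For example, one million cubic metres of injected wastewater caps the induced event near magnitude 5 under these assumptions, consistent with the largest observed cases; the bound grows only logarithmically with volume.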
A network of cancer genes with co-occurring and anti-co-occurring mutations.
Qinghua Cui
Full Text Available Certain cancer genes contribute to tumorigenesis in a manner of either co-occurring or mutually exclusive (anti-co-occurring) mutations; however, the global picture of when, where and how these functional interactions occur remains unclear. This study presents a systems biology approach for this purpose. After applying this method to cancer gene mutation data generated from large-scale and whole genome sequencing of cancer samples, a network of cancer genes with co-occurring and anti-co-occurring mutations was constructed. Analysis of this network revealed that genes with co-occurring mutations prefer direct signaling transductions and that the interaction relations among cancer genes in the network are related with their functional similarity. It was also revealed that genes with co-occurring mutations tend to have similar mutation frequencies, whereas genes with anti-co-occurring mutations tend to have different mutation frequencies. Moreover, genes with more exons tend to have more co-occurring mutations with other genes, and genes having lower local coherent network structures tend to have higher mutation frequency. The network showed two complementary modules that have distinct functions and have different roles in tumorigenesis. This study presented a framework for the analysis of cancer genome sequencing outputs. The presented data and uncovered patterns are helpful for understanding the contribution of gene mutations to tumorigenesis and valuable in the identification of key biomarkers and drug targets for cancer.
Probiotic properties of yeasts occurring in fermented food and beverages
Jespersen, Lene
Besides being able to improve the quality and safety of many fermented foods and beverages, some yeasts offer a number of probiotic traits. In particular, a group of yeasts referred to as "Saccharomyces boulardii", though taxonomically belonging to Saccharomyces cerevisiae, has been claimed to have probiotic properties. In addition, yeasts naturally occurring in foods and beverages globally will have traits that might have a positive impact on human health.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that achievable in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial approximation algorithm is proposed.
Stratakou, I.; Fels-Klerx, van der H.J.
2010-01-01
In 2006, the European Commission established maximum levels for ochratoxin A in wine and grape products, using occurrence data up to 2001 and toxicity data up to 2006. This paper presents an up-to-date overview of the occurrence of mycotoxins in grapes and wine produced in Europe in the period 1
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the natural algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms.
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions.
Occurrence of organic pollutants in constructed wetlands
TRSKOVÁ, Eliška
2013-01-01
Constructed wetlands are wetlands designed to improve the quality of water. In this work, four representatives of typical organic pollutants in constructed wetlands are studied: DEET, cotinine, coprostanol and galaxolide, as representatives of insecticides, alkaloids, faecal sterols and musk compounds, respectively. Moreover, three different types of extraction techniques, aqueous two-phase extraction (ATPE), liquid-liquid extraction (LLE) and stir bar sorptive extraction (SBSE), are investigated.
Tetrahydroberberine, a pharmacologically active naturally occurring alkaloid.
Pingali, Subramanya; Donahue, James P; Payton-Stewart, Florastina
2015-04-01
Tetrahydroberberine (systematic name: 9,10-dimethoxy-5,8,13,13a-tetrahydro-6H-benzo[g][1,3]benzodioxolo[5,6-a]quinolizine), C20H21NO4, a widely distributed naturally occurring alkaloid, has been crystallized as a racemic mixture about an inversion center. A bent conformation of the molecule is observed, with an angle of 24.72 (5)° between the arene rings at the two ends of the reduced quinolizinium core. The intermolecular hydrogen bonds that play an apparent role in crystal packing are 1,3-benzodioxole -CH2···OCH3 and -OCH3···OCH3 interactions between neighboring molecules.
Pilomyxoid Astrocytoma Occurring in the Third Ventricle
Sanghyeon Kim
2015-01-01
Pilomyxoid astrocytoma (PMA) is a rare central nervous system tumor that has been included in the 2007 World Health Organization Classification of Tumors of the Central Nervous System. Due to its more aggressive behavior, PMA is classified as a Grade II neoplasm by the World Health Organization. PMA predominantly affects the hypothalamic/chiasmatic region and occurs in children (mean age of occurrence = 10 months). We report a case of a 24-year-old man who presented with headache, nausea, and vomiting. Brain CT and MRI revealed a mass occupying only the third ventricle. We performed partial resection. Histological findings, including monophasic growth with a myxoid background and the absence of Rosenthal fibers or eosinophilic granular bodies, as well as strong positivity for glial fibrillary acidic protein, were consistent with PMA.
Detection of Harmonic Occurring using Kalman Filtering
Hussain, Dil Muhammad Akbar; Shoro, Ghulam Mustafa; Imran, Raja Muhammed
2014-01-01
As long as the load on a power system is linear, which was generally the case before the 1980s, typically no harmonics are produced. However, modern power electronic equipment for controlled power consumption produces harmonic disturbances; these devices possess a nonlinear voltage/current characteristic. These harmonics must not be allowed to grow beyond a certain limit to avoid any grave consequence for the customer's main supply. Filters can be implemented at the power source or utility location to eliminate these harmonics. In this paper we detect the instance at which these harmonics occur using a Kalman filter. This may be very useful, for example, for quickly switching on certain filters based on the harmonics present. We use a unique technique to detect the occurrence of harmonics.
Maximum Safety Regenerative Power Tracking for DC Traction Power Systems
Guifu Du
2017-02-01
Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential voltage known as "rail potential" is generated between the rails and ground. Currently, abnormal rises of rail potential occur in many railway lines during the operation of railway systems. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively and guarantee safe, energy-saving operation of DC traction power systems.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. A detailed discussion on circuit-oriented model development is given, and the MPPT effectiveness of various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
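The perturb-and-observe hill-climbing idea that several of these abstracts build on can be sketched in a few lines. The PV power curve below is a hypothetical stand-in with its MPP at 17 V; it is not a model from any of the papers above.

```python
def pv_power(v):
    # Hypothetical concave PV power curve with its maximum power point at 17.0 V
    return max(0.0, 100.0 - (v - 17.0) ** 2)

def perturb_and_observe(v_start=10.0, step=0.1, iters=200):
    """Classic P&O: keep perturbing the operating voltage in the direction
    that increased power; reverse direction when power drops."""
    v = v_start
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

The operating point climbs toward the MPP and then oscillates around it in increments of `step`; this residual oscillation is exactly what the improved MPPT methods discussed in these abstracts try to reduce.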
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided, that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs are constructed, that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the electron Yukawa coupling, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = O(300\text{ GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl} y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which the Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of the subjects' five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Introduction to naturally occurring radioactive material
Egidi, P.
1997-08-01
Naturally occurring radioactive material (NORM) is everywhere; we are exposed to it every day. It is found in our bodies, the food we eat, the places where we live and work, and in products we use. We are also bathed in a sea of natural radiation coming from the sun and deep space. Living systems have adapted to these levels of radiation and radioactivity. But some industrial practices involving natural resources concentrate these radionuclides to a degree that they may pose risk to humans and the environment if they are not controlled. Other activities, such as flying at high altitudes, expose us to elevated levels of NORM. This session will concentrate on diffuse sources of technologically-enhanced (TE) NORM, which are generally large-volume, low-activity waste streams produced by industries such as mineral mining, ore beneficiation, production of phosphate fertilizers, water treatment and purification, and oil and gas production. The majority of radionuclides in TENORM are found in the uranium and thorium decay chains. Radium and its subsequent decay products (radon) are the principal radionuclides used in characterizing the redistribution of TENORM in the environment by human activity. We will briefly review other radionuclides occurring in nature (potassium and rubidium) that contribute primarily to background doses. TENORM is found in many waste streams, for example, scrap metal, sludges, slags, and fluids, and is being discovered in industries traditionally not thought of as affected by radionuclide contamination. Not only the forms and volumes, but also the levels of radioactivity in TENORM vary. Current discussions about the validity of the linear no-threshold dose theory are central to the TENORM issue. TENORM is not regulated by the Atomic Energy Act or other Federal regulations. Control and regulation of TENORM is not consistent from industry to industry nor from state to state. Proposed regulations are moving from concentration-based standards to dose-based standards.
Nonuniform sampling and maximum entropy reconstruction in multidimensional NMR.
Hoch, Jeffrey C; Maciejewski, Mark W; Mobli, Mehdi; Schuyler, Adam D; Stern, Alan S
2014-02-18
NMR spectroscopy is one of the most powerful and versatile analytic tools available to chemists. The discrete Fourier transform (DFT) played a seminal role in the development of modern NMR, including the multidimensional methods that are essential for characterizing complex biomolecules. However, it suffers from well-known limitations: chiefly the difficulty in obtaining high-resolution spectral estimates from short data records. Because the time required to perform an experiment is proportional to the number of data samples, this problem imposes a sampling burden for multidimensional NMR experiments. At high magnetic field, where spectral dispersion is greatest, the problem becomes particularly acute. Consequently, multidimensional NMR experiments that rely on the DFT must either sacrifice resolution in order to be completed in reasonable time or use inordinate amounts of time to achieve the potential resolution afforded by high-field magnets. Maximum entropy (MaxEnt) reconstruction is a non-Fourier method of spectrum analysis that can provide high-resolution spectral estimates from short data records. It can also be used with nonuniformly sampled data sets. Since resolution is substantially determined by the largest evolution time sampled, nonuniform sampling enables high resolution while avoiding the need to uniformly sample at large numbers of evolution times. The Nyquist sampling theorem does not apply to nonuniformly sampled data, and artifacts that occur with the use of nonuniform sampling can be viewed as frequency-aliased signals. Strategies for suppressing nonuniform sampling artifacts include the careful design of the sampling scheme and special methods for computing the spectrum. Researchers now routinely report that they can complete an N-dimensional NMR experiment 3^(N-1) times faster (a 3D experiment in one ninth of the time). As a result, high-resolution three- and four-dimensional experiments that were prohibitively time consuming are now practical.
Alfafara, C G; Miura, K; Shimizu, H; Shioya, S; Suga, K; Suzuki, K
1993-02-20
A fuzzy logic controller (FLC) for the control of ethanol concentration was developed and utilized to realize the maximum production of glutathione (GSH) in yeast fed-batch culture. A conventional fuzzy controller, which uses the control error and its rate of change in the premise part of the linguistic rules, worked well when the initial error of ethanol concentration was small. However, when the initial error was large, controller overreaction resulted in an overshoot. An improved fuzzy controller was obtained to avoid controller overreaction by diagnostic determination of "glucose emergency states" (i.e., glucose accumulation or deficiency); appropriate emergency control action was then obtained by the use of weight coefficients and modification of the linguistic rules to decrease the overreaction of the controller when the fermentation was in an emergency state. The improved fuzzy controller was able to maintain a constant ethanol concentration under conditions of large initial error. The improved fuzzy control system was used in the GSH production phase of the optimal operation to indirectly control the specific growth rate μ to its critical value μc. In the GSH production phase of the fed-batch culture, the optimal solution was to control μ to μc in order to maintain a maximum specific GSH production rate. The value of μc also coincided with the critical specific growth rate at which no ethanol formation occurs. Therefore, the control of μ to μc could be done indirectly by maintaining a constant ethanol concentration, that is, zero net ethanol formation, through proper manipulation of the glucose feed rate. Maximum production of GSH was realized using the developed FLC; maximum production was a consequence of the substrate feeding strategy and cysteine addition, and the FLC was a simple way to realize the strategy.
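A minimal Sugeno-style fuzzy controller of the kind described, using the control error and its rate of change, can be sketched by driving a toy first-order "ethanol" plant toward a setpoint via the feed rate. The membership functions, rule weights, gains, and plant dynamics below are all invented for illustration; they are not the controller or culture model from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_feed_change(error, d_error):
    """Sugeno-style rules on the normalized control error, plus a small
    damping term on the error's rate of change (the premise variables
    named in the abstract)."""
    e = max(-1.0, min(1.0, error / 2.0))   # normalize and saturate the error
    w_neg = tri(e, -2.0, -1.0, 0.0)        # ethanol above setpoint -> cut feed
    w_zero = tri(e, -1.0, 0.0, 1.0)        # near setpoint -> hold feed
    w_pos = tri(e, 0.0, 1.0, 2.0)          # ethanol below setpoint -> add feed
    total = w_neg + w_zero + w_pos
    action = (w_neg * -1.0 + w_zero * 0.0 + w_pos * 1.0) / total
    return action - 0.3 * d_error          # damp fast changes

# Closed-loop simulation against an invented first-order plant
setpoint, ethanol, feed = 2.0, 0.0, 0.0
prev_error = setpoint - ethanol
for _ in range(4000):
    error = setpoint - ethanol
    feed = max(0.0, feed + 0.05 * fuzzy_feed_change(error, error - prev_error))
    prev_error = error
    ethanol += 0.1 * (0.5 * feed - 0.1 * ethanol)   # toy plant dynamics
```

Saturating the normalized error is what keeps the controller from overreacting to a large initial error, the failure mode the improved controller in the abstract was designed to avoid.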
Maximum Time Separation of Events in Cyclic Systems with Linear and Latest Timing Constraints
Jin, Fen; Hulgaard, Henrik; Cerny, Eduard
1998-01-01
The determination of the maximum time separations of events is important in the design, synthesis, and verification of digital systems, especially in interface timing verification. Many researchers have explored solutions to the problem with various restrictions: a) on the type of constraints, and b) on whether the events in the specification are allowed to occur repeatedly. When the events can occur only once, the problem is well solved. There are fewer concrete results for systems where the events can occur repeatedly. We extend the work by Hulgaard et al. for computing the maximum...
Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles
Paulo H. Egydio
2008-01-01
Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should aim at restoring a functional penis, that is, straightening the penis with enough rigidity for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. Tunical reconstruction for maximum penile length and girth restoration should be based on the maximum possible length of the dissected neurovascular bundle and the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As penile rectification and rigidity are required to achieve complete functional restoration of the penis, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough to provide complete functional restoration. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.
Novel TPPO Based Maximum Power Point Method for Photovoltaic System
ABBASI, M. A.
2017-08-01
Photovoltaic (PV) systems have great potential and are nowadays installed more widely than other renewable energy sources. However, a PV system cannot perform optimally due to its strong dependence on climate conditions; because of this dependency, the PV system does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe (P&O) method, which is the most popular due to its simplicity, low cost and fast tracking, but it deviates from the MPP in continuously changing weather conditions, especially under rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance in changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of the PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
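The First-Fit-Increasing rule mentioned above is simple to state in code; the capacity and item sizes below are arbitrary examples. (The paper analyzes this rule under the maximum-resource objective; the placement rule itself is unchanged.)

```python
# Hedged sketch of First-Fit-Increasing: sort items by increasing size,
# put each into the first bin where it fits, open a new bin otherwise.

def first_fit_increasing(items, capacity=1.0):
    bins = []                          # current load of each open bin
    for size in sorted(items):
        for i, load in enumerate(bins):
            if load + size <= capacity:
                bins[i] += size
                break
        else:
            bins.append(size)          # no open bin fits: open a new one
    return len(bins)

n_bins = first_fit_increasing([0.5, 0.6, 0.4, 0.3])
```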
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
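The single-constraint argument sketched in the abstract can be written out in a few lines. This is a hedged reconstruction of the standard maximum entropy calculation, not Visser's exact derivation:

```latex
\text{Maximize } S[p] = -\int p(x)\ln p(x)\,dx
\quad\text{subject to}\quad
\int p(x)\,dx = 1, \qquad \int p(x)\ln x\,dx = \chi .

\text{Stationarity of } \mathcal{L} = S[p]
  + \alpha\Big(1-\int p\,dx\Big)
  + \beta\Big(\chi-\int p\ln x\,dx\Big)
\text{ gives } -\ln p(x) - 1 - \alpha - \beta\ln x = 0,

\text{hence } p(x) = C\,x^{-\beta},
\text{ a pure power law; on } [x_{\min},\infty) \text{ with } \beta>1,
\quad C = (\beta-1)\,x_{\min}^{\,\beta-1}.
```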
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem for general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability in the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then, with probability one, the vector of true parameters is the unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence converges to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced for planets with higher Fe/Si ratios, and when irradiation effects on the structure of the gas envelope are taken into account.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\dots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law motivated by the generalized uncertainty principle (GUP) reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a generally nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus differed from that of the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max − X_0 = (0.59 ± 0.02)·Y_X/P·C.
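The reported equation is straightforward to apply; the input values below are invented placeholders for illustration, not measurements from the study:

```python
# X_max = X_0 + k * Y_XP * C with k = 0.59 (reported coefficient).
# x0 (initial biomass), y_xp (biomass yield per unit lactate) and
# c (MIC of lactate) are hypothetical illustration values.
def predict_max_biomass(x0, y_xp, c, k=0.59):
    return x0 + k * y_xp * c

x_max = predict_max_biomass(x0=0.5, y_xp=0.12, c=180.0)
```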
36 CFR 212.10 - Maximum economy National Forest System roads.
2010-07-01
... 36 Parks, Forests, and Public Property 2 2010-07-01 2010-07-01 false Maximum economy National... economy National Forest System roads. The Chief may acquire, construct, reconstruct, improve, and maintain... Forest Service in locations and according to specifications which will permit maximum economy in...
R Wave Extraction Based on the Maximum First Derivative plus the Maximum Value of the Double Search
Wen-po Yao; Wen-li Yao; Min Wu; Tie-bing Liu
2016-01-01
R-wave detection is the main approach for heart rate variability analysis and clinical applications based on the R-R interval. The maximum first derivative plus the maximum value of the double search algorithm is applied to electrocardiogram (ECG) records from the MIT-BIH Arrhythmia Database to extract the R wave. Through study of the algorithm's characteristics and the R-wave detection method, the data segmentation method is modified to improve detection accuracy. After this modification, the average accuracy rate on 6 sets of short ECG data increases from 82.51% to 93.70%, and the average accuracy rate on 11 groups of long-range data is 96.61%. Test results show that the algorithm and segmentation method can accurately locate the R wave and have good effectiveness and versatility, but some beats may remain undetected due to the algorithm's implementation.
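A hedged sketch of the core idea (find the steepest upslope, then take the following local maximum as the R peak); the window length and the synthetic beat are illustrative assumptions, not the paper's MIT-BIH processing:

```python
import numpy as np

def detect_r_peak(segment, window=40):
    """Locate the R peak as the maximum value following the maximum
    first derivative within the segment."""
    d = np.diff(segment)
    i_slope = int(np.argmax(d))                 # maximum first derivative
    tail = segment[i_slope:i_slope + window]    # search for max value after it
    return i_slope + int(np.argmax(tail))

# Synthetic single beat: a narrow Gaussian bump peaking at sample 50.
beat = np.exp(-((np.arange(100) - 50.0) ** 2) / 8.0)
peak = detect_r_peak(beat)
```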
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
S. Radhika
2016-04-01
Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC based adaptive filters.
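The fixed-step MCC update that the proposed method builds on can be sketched as below. The paper's MSD-minimizing variable step rule is replaced here by a constant mu, and the system-identification setup is invented for illustration:

```python
import numpy as np

def mcc_lms(x, d, n_taps=4, mu=0.05, sigma=1.0):
    """MCC-based LMS: the Gaussian kernel exp(-e^2 / (2 sigma^2))
    de-weights large (impulsive) errors in the weight update."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]       # most recent sample first
        e = d[n] - w @ u
        w += mu * np.exp(-e * e / (2.0 * sigma * sigma)) * e * u
    return w

# Toy system identification: recover a known 4-tap FIR filter.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = mcc_lms(x, d)
```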
Maximum Likelihood Joint Tracking and Association in Strong Clutter
Leonid I. Perlovsky
2013-01-01
Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied extensively. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Full Text Available Ship squat is a combined effect of a ship's draft and trim increase due to ship motion under restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
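The Poisson-likelihood fixed-point idea can be sketched for the simplest case of a single line amplitude over a known background. This is a generic illustration of the approach (essentially an EM-type update), not CORA's actual implementation, and all numbers are synthetic:

```python
import numpy as np

def ml_line_amplitude(counts, shape, background, iters=200):
    """ML amplitude a for the model m_i = a*shape_i + background_i under
    Poisson noise; the stationarity condition sum(n_i s_i / m_i) = sum(s_i)
    is solved by the fixed-point update a <- a * sum(n s / m) / sum(s)."""
    a = max(counts.sum() / shape.sum(), 1e-12)   # crude starting value
    for _ in range(iters):
        m = a * shape + background
        a = a * np.sum(counts * shape / m) / shape.sum()
    return a

# Noise-free check: counts generated exactly by the model with a_true = 5.
chan = np.arange(30)
shape = np.exp(-0.5 * ((chan - 15.0) / 2.0) ** 2)
counts = 5.0 * shape + 2.0
a_hat = ml_line_amplitude(counts, shape, np.full(30, 2.0))
```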
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Mud Flow Characteristics Occurred in Izuoshima Island, 2013
Takebayashi, H.; Egashira, S.; Fujita, M.
2015-12-01
Landslides and mud flows occurred in the western part of Izuoshima Island, Japan on 16 October 2013. Izuoshima is a volcanic island whose land surface is covered by volcanic ash sediment to a depth of 1 m; hence, a mud flow with high sediment concentration was formed. In a debris flow, a laminar layer forms from the bed to the fluid surface. In a mud flow, by contrast, the laminar flow is restricted to near the bed and a turbulent flow forms above the laminar layer; as a result, the equilibrium slope of a mud flow is smaller than that of a debris flow. In this study, a numerical mud flow model considering the effect of turbulent flow on the equilibrium slope is developed. The model is then applied to the mud flow that occurred on Izuoshima Island, and its applicability and the flow characteristics of the mud flow are discussed. Comparing the horizontal flow areas between the simulated results and the field data shows that the outline of the horizontal shape of the flow areas is reproduced well. Furthermore, the horizontal distribution of the erosion and deposition areas is reproduced well by the numerical analysis, except in the residential (Kandachi) area: Kandachi was judged an erosion area by field observation, but sediment was deposited there in the numerical analysis. The 1.5-hour heavy rain exceeding 100 mm/h after the mud flow is considered to cause this discrepancy. Comparing the horizontal distribution of the maximum flow surface elevation between the simulated results and the field data shows that the simulated flow depth is slightly overestimated, because of a wider erosion area due to the coarse-resolution elevation data. The average velocity and depth of the mud flow were large enough to collapse the houses.
Boundary condition effects on maximum groundwater withdrawal in coastal aquifers.
Lu, Chunhui; Chen, Yiming; Luo, Jian
2012-01-01
Prevention of sea water intrusion in coastal aquifers subject to groundwater withdrawal requires optimization of well pumping rates to maximize the water supply while avoiding sea water intrusion. Boundary conditions and the aquifer domain size have significant influences on simulating flow and concentration fields and estimating maximum pumping rates. In this study, an analytical solution is derived based on potential-flow theory for evaluating maximum groundwater pumping rates in a domain with a constant-hydraulic-head landward boundary. An empirical correction factor, introduced by Pool and Carrera (2011) to account for mixing in the case of a constant-recharge-rate boundary condition, is found to be applicable to the constant-hydraulic-head boundary condition as well, and therefore greatly improves the usefulness of the sharp-interface analytical solution. Compared with the solution for a constant-recharge-rate boundary, we find that a constant-hydraulic-head boundary often yields larger estimates of the maximum pumping rate, and that when the domain size is more than five times the distance between the well and the coastline, the choice of landward boundary condition becomes insignificant, with a relative difference between the two solutions of less than 2.5%. These findings can serve as preliminary guidance for conducting numerical simulations and designing tank-scale laboratory experiments on groundwater withdrawal in coastal aquifers with minimized boundary condition effects.
Minimizing Maximum Response Time and Delay Factor in Broadcast Scheduling
Chekuri, Chandra; Moseley, Benjamin
2009-01-01
We consider online algorithms for pull-based broadcast scheduling. In this setting there are n pages of information at a server and requests for pages arrive online. When the server serves (broadcasts) a page p, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor, and their weighted versions. We obtain the following results in the worst-case online competitive model. - We show that FIFO (first-in first-out) is 2-competitive even when the page sizes are different. Previously this was known only for unit-sized pages [10] via a delicate argument. Our proof differs from [10] and is perhaps more intuitive. - We give an online algorithm for maximum delay-factor that is O(1/ε²)-competitive with (1+ε)-speed for unit-sized pages and with (2+ε)-speed for different sized pages. This improves on the algorithm in [12] which required (2+ε)-speed and (4+ε)-speed respectively. In addition we show that the algori...
Estimating the Size and Timing of the Maximum Amplitude of Solar Cycle 24
Ke-Jun Li; Peng-Xin Gao; Tong-Wei Su
2005-01-01
A simple statistical method is used to estimate the size and timing of the maximum amplitude of the next solar cycle (cycle 24). Presuming cycle 23 to be a short cycle (as is more likely), the minimum of cycle 24 should occur about December 2006 (±2 months) and the maximum around March 2011 (±9 months), with an amplitude of 189.9 ± 15.5 if the cycle is a fast riser, or about 136 if it is a slow riser. If we presume cycle 23 to be a long cycle (as is less likely), the minimum of cycle 24 should occur about June 2008 (±2 months) and the maximum about February 2013 (±8 months), with an amplitude of about 137 or 80 according as the cycle is a fast or a slow riser.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T⁻¹(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
GROWTH OF NATURALLY OCCURING Listeria innocua IN COPPA DI TESTA
G. Merialdi
2010-06-01
Coppa di testa is a traditional cooked pork salami produced in different Italian regions. The main raw material is deboned meat from pork head, with the addition of tongue and rind. After a long (3-5 h), high-temperature (97 °C) cooking, additives and flavourings are added and the salami is prepared. After cooling, the salami is often portioned and vacuum-packaged. In this study, the growth of a naturally occurring contamination of Listeria innocua in three batches of vacuum-packaged Coppa di testa, stored at 4 °C for 80 days, is described. The average maximum growth rate was 0.24 day⁻¹ and the average doubling time was 2.87 days. The maximum growth level ranged from 4.90 to 8.17 log10 cfu/g. These results indicate that Coppa di testa definitely supports the growth of Listeria innocua under the storage conditions considered. Taking into account that at 4 °C Listeria monocytogenes strains are associated with higher growth rates than L. innocua, these results emphasize the importance of preventing Listeria monocytogenes contamination in the production stages following cooking.
Smith, K.P.; Blunt, D.L.; Williams, G.P. [Argonne National Lab., IL (United States). Environmental Assessment Div.; Tebes, C.L. [Univ. of Illinois, Urbana, IL (United States)
1996-09-01
A preliminary radiological dose assessment of equipment decontamination, subsurface disposal, landspreading, equipment smelting, and equipment burial was conducted to address concerns regarding the presence of naturally occurring radioactive materials (NORM) in production waste streams. The assessment estimated maximum individual dose equivalents for workers and the general public. Sensitivity analyses of certain input parameters also were conducted. On the basis of this assessment, it is concluded that (1) regulations requiring workers to wear respiratory protection during equipment cleaning operations are likely to result in lower worker doses, (2) underground injection and downhole encapsulation of NORM wastes present a negligible risk to the general public, and (3) potential doses to workers and the general public related to smelting NORM-contaminated equipment can be controlled by limiting the contamination level of the initial feed. It is recommended that (1) NORM wastes be further characterized to improve studies of potential radiological doses; (2) states be encouraged to permit subsurface disposal of NORM more readily, provided further assessments support this study's results; (3) further assessment of landspreading NORM wastes be conducted; and (4) the political, economic, sociological, and nonradiological issues related to smelting NORM-contaminated equipment be studied to fully examine the feasibility of this disposal option.
BINDER DRAINAGE TEST FOR POROUS MIXTURES MADE BY VARYING THE MAXIMUM AGGREGATE SIZES
Hardiman Hardiman
2004-01-01
Binder drainage occurs with mixes of small aggregate surface area, particularly porous asphalt. The binder drainage test, developed by the Transport Research Laboratory, UK, is commonly used to set an upper limit on the acceptable binder content for a porous mix. This paper presents the results of a laboratory investigation to determine the effects of different binder types on the binder drainage characteristics of porous mixes made with maximum aggregate sizes of 20, 14 and 10 mm. Two types of binder were used: conventional 60/70 pen bitumen, and styrene-butadiene-styrene (SBS) modified bitumen. The amount of binder lost through drainage after three hours at the maximum mixing temperature was measured in duplicate for mixes of different maximum sizes and binder contents. The maximum mixing temperature adopted depends on the type of binder used. The retained binder is plotted against the initial mixed binder content, together with the line of equality where the retained binder equals the mixed binder content. The results indicate the significant contribution of using SBS-modified bitumen to increasing the target binder content. Their significance is discussed in terms of the target binder content, the critical binder content, the maximum mixed binder content and the maximum retained binder content values obtained from the binder drainage test. It was concluded that increasing the maximum aggregate size decreases the maximum retained binder content, critical binder content, target binder content and maximum mixed binder content for both binders; for all mixtures, however, the SBS-modified bitumen gave the highest values.
Estimating landscape carrying capacity through maximum clique analysis.
Donovan, Therese M; Warrington, Gregory S; Schwenk, W Scott; Dinitz, Jeffrey H
2012-12-01
Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in an 1153-km² study area in Vermont, USA. These two species were selected to highlight different approaches to building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m resolution HS maps for each species via occupancy modeling (Ovenbird) and resource utilization modeling (bobcat). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be
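The graph construction described above can be sketched in miniature: vertices are pseudo-home-range points, edges link pairs whose territories are mutually compatible, and the maximum clique is the largest set of simultaneously feasible home ranges. The brute-force search below is a hypothetical illustration only (real HS-map problems at the scale reported need a dedicated solver such as Cliquer); the toy compatibility graph is invented for the example.

```python
from itertools import combinations

def maximum_clique(vertices, edges):
    """Brute-force maximum clique: try subset sizes from largest to smallest.

    Fine for toy graphs; exponential in general, hence tools like Cliquer.
    """
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    for k in range(len(vertices), 0, -1):
        for subset in combinations(vertices, k):
            # a clique requires every pair in the subset to be linked
            if all(b in adj[a] for a, b in combinations(subset, 2)):
                return set(subset)
    return set()

# five pseudo-home-range points; edges mark compatible (non-overlapping) pairs
points = [0, 1, 2, 3, 4]
compatible = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
print(maximum_clique(points, compatible))  # {0, 1, 2} -> carrying capacity 3
```

The clique size, not the number of vertices, is the carrying-capacity estimate: here only three of the five candidate points can hold territories at once.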
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Maximum entropy reconstruction of spin densities involving non uniform prior
Schweizer, J.; Ressouche, E. [DRFMC/SPSMS/MDN CEA-Grenoble (France); Papoular, R.J. [CEA-Saclay, Gif sur Yvette (France). Lab. Leon Brillouin; Tasset, F. [Inst. Laue Langevin, Grenoble (France); Zheludev, A.I. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.
1997-09-01
Diffraction experiments give microscopic information on structures in crystals. A method which uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one which has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data, as well as for distributions of spin density. The density maps obtained by this method, as compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases, before the measurements are performed, some knowledge exists about the distribution which is investigated. It can range from simple information on the type of scattering electrons to an elaborate theoretical model. In these cases the uniform prior, which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the Maximum Entropy formalism through a model m(r), via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model, observed in the final map, is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state (market equilibrium) is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
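The linear-time solution the abstract alludes to is the classic fold usually attributed to Kadane. The sketch below is in Python rather than the paper's functional setting, and the example list is an arbitrary choice for illustration, not taken from the paper.

```python
def max_segment_sum(xs):
    """Linear-time maximum segment sum (Kadane's algorithm).

    The empty segment (sum 0) is allowed, matching the usual
    program-construction formulation of the problem.
    """
    best = ending_here = 0
    for x in xs:
        # best sum of a segment ending at this element (or empty)
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best

print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
```

The invariant is that `ending_here` holds the maximum sum over segments ending at the current position, which is what turns the naive cubic enumeration of all segments into a single pass.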
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ0, μ1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ0, ℓ1)}. The optimal control switches between μ0 and μ1 at X_t = g∗(S_t), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest in a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only focus on discriminating moving objects by background subtraction, whereas the objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects in surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers using features collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
Maximum Power Point Tracking of Photovoltaic System Using Intelligent Controller
Swathy C.S
2013-04-01
Photovoltaic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load as temperature and solar irradiation change. This overcomes the problem of mismatch between the given load and the solar array. The energy conservation principle is used to obtain the small-signal model and transfer function. A simulation of an MPPT controller driving a DC/DC boost converter that feeds a load was carried out. A PI controller and a fuzzy logic controller were each used as the MPPT controller governing the DC/DC converter. Simulations and experimental results showed excellent performance and were used to compare the PI controller and the fuzzy logic controller.
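The perturb-and-observe idea that underlies MPPT controllers like these (and the P&O method mentioned in the head record) can be illustrated in a few lines: nudge the operating voltage, keep the perturbation direction while power rises, reverse it when power falls. The PV curve below is a toy stand-in (a simple concave function, not a real module model), and every name in the sketch is invented for illustration.

```python
def pv_power(v):
    # toy concave P-V curve peaking at 17.2 V (illustrative only,
    # not a physical photovoltaic model)
    return max(0.0, 100.0 - (v - 17.2) ** 2)

def perturb_and_observe(p_of_v, v0=12.0, dv=0.2, steps=200):
    """Minimal P&O MPPT sketch: step the voltage, reverse on power drop."""
    v, p = v0, p_of_v(v0)
    step = dv
    for _ in range(steps):
        v_new = v + step
        p_new = p_of_v(v_new)
        if p_new < p:      # power dropped: reverse the perturbation
            step = -step
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe(pv_power)
```

The final voltage oscillates within one or two perturbation steps of the true maximum, which is exactly the steady-state oscillation that the improved methods in these abstracts aim to reduce.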
Maximum Entropy Estimation of n-Year Extreme Waveheights
徐德伦; 张军; 郑桂珍
2004-01-01
A new method for estimating the n-year (50- or 100-year) return-period waveheight, namely the extreme waveheight expected to occur once in n years, is presented on the basis of the maximum entropy principle. The main points of the method are as follows: (1) based on the Hamiltonian principle, a maximum entropy probability density function for the extreme waveheight H, f(H) = αH^γ e^(−βH⁴), is derived from a Lagrangian function subject to some necessary and rational constraints; (2) the parameters α, β and γ in the function are expressed in terms of the mean H̄, the variance V = E[(H − H̄)²] and the bias B = E[(H − H̄)³]; and (3) with H̄, V and B estimated from observed data, the n-year return-period waveheight H_n is computed in accordance with the formula 1/(1 − F(H_n)) = n, where F(H_n) = ∫₀^(H_n) f(H) dH is the cumulative distribution function. Examples of estimating the 50- and 100-year return-period waveheights by the present method and by some currently used methods from observed data acquired at two hydrographic stations are given. A comparison of the estimated results shows that the present method is superior to the others.
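Step (3) reduces to solving 1/(1 − F(H_n)) = n for H_n. A minimal numerical sketch follows, with illustrative (not fitted) parameters: choosing γ = 3 happens to make f normalized when α = 4β, so that F(H) = 1 − exp(−βH⁴) and the numerical answer can be checked against the closed form H_n = (ln n / β)^(1/4).

```python
import math

def return_height(alpha, beta, gamma, n, h_max=30.0, dh=1e-3):
    """Solve 1/(1 - F(H_n)) = n by trapezoidal integration of
    f(H) = alpha * H**gamma * exp(-beta * H**4) from 0 upward.

    Parameters here are illustrative; in the method they come from the
    sample mean, variance and bias of observed waveheights.
    """
    target = 1.0 - 1.0 / n       # F(H_n) at the n-year return period
    F, h, f_prev = 0.0, 0.0, 0.0
    while h < h_max:
        h2 = h + dh
        f2 = alpha * h2 ** gamma * math.exp(-beta * h2 ** 4)
        F += 0.5 * (f_prev + f2) * dh   # trapezoid rule
        if F >= target:
            return h2
        f_prev, h = f2, h2
    return h_max

# gamma = 3, alpha = 4*beta gives the analytically checkable special case
h50 = return_height(alpha=0.004, beta=0.001, gamma=3, n=50)
```

For β = 0.001 and n = 50 this gives H_50 ≈ 7.9 m, matching (ln 50 / 0.001)^(1/4).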
Radiation engineering of optical antennas for maximum field enhancement.
Seok, Tae Joon; Jamshidi, Arash; Kim, Myungki; Dhuey, Scott; Lakhani, Amit; Choo, Hyuck; Schuck, Peter James; Cabrini, Stefano; Schwartzberg, Adam M; Bokor, Jeffrey; Yablonovitch, Eli; Wu, Ming C
2011-07-13
Optical antennas have generated much interest in recent years due to their ability to focus optical energy beyond the diffraction limit, benefiting a broad range of applications such as sensitive photodetection, magnetic storage, and surface-enhanced Raman spectroscopy. To achieve the maximum field enhancement for an optical antenna, parameters such as the antenna dimensions, loading conditions, and coupling efficiency have been previously studied. Here, we present a framework, based on coupled-mode theory, to achieve maximum field enhancement in optical antennas through optimization of optical antennas' radiation characteristics. We demonstrate that the optimum condition is achieved when the radiation quality factor (Q_rad) of optical antennas is matched to their absorption quality factor (Q_abs). We achieve this condition experimentally by fabricating the optical antennas on a dielectric (SiO2) coated ground plane (metal substrate) and controlling the antenna radiation through optimizing the dielectric thickness. The dielectric thickness at which the matching condition occurs is approximately half of the quarter-wavelength thickness, typically used to achieve constructive interference, and leads to ∼20% higher field enhancement relative to a quarter-wavelength thick dielectric layer.
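The Q-matching condition can be checked with a one-line coupled-mode-theory calculation: on resonance, the stored energy of a single-mode resonator driven through its radiation channel scales as (1/Q_rad)/(1/Q_rad + 1/Q_abs)², which is maximized when Q_rad = Q_abs. The numbers below are illustrative, not taken from the paper.

```python
def stored_energy(q_rad, q_abs):
    """On-resonance stored energy from temporal coupled-mode theory,
    up to an overall constant: U ~ (1/Q_rad) / (1/Q_rad + 1/Q_abs)**2."""
    return (1.0 / q_rad) / (1.0 / q_rad + 1.0 / q_abs) ** 2

q_abs = 10.0  # illustrative absorption quality factor
best = max(range(1, 101), key=lambda q: stored_energy(float(q), q_abs))
print(best)  # 10 -> maximum enhancement when Q_rad matches Q_abs
```

Differentiating U with respect to 1/Q_rad confirms the numerical result: the maximum sits exactly at 1/Q_rad = 1/Q_abs, the matching condition the paper realizes by tuning the dielectric spacer thickness.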
Maximum embryo absorbed dose from intravenous urography: interhospital variations
Damilakis, J.; Perisinakis, K. [University of Crete (Greece). Dept. of Medical Physics; Koukourakis, M. [University of Crete (Greece). Dept. of Radiology; Gourtsoyiannis, N. [University Hospital of Iraklion, Crete (Greece). Dept. of Radiotherapy
1997-12-01
The purpose of this study was to determine the maximum embryo dose during intravenous urography (IVU) examinations, when inadvertent irradiation of a pregnant woman occurs, and to investigate the variation of doses received at different institutions. Doses at average embryo depth from IVU examinations were measured in four institutions using a Rando phantom and thermoluminescent crystals. In order to estimate the maximum range of embryo doses, radiologists were asked to carry out the examinations with the same technique as in female patients with acute ureteral obstruction. The range of doses estimated at embryo depth for the institutions participating in this study was 5.77 to 35.2 mGy. The considerable interhospital variation in dose can be explained by the different equipment and techniques used. A simple method of estimating embryo dose from pelvic radiographs reported previously was found to be applicable to IVU examinations as well. The absorbed dose at 6 cm, the average embryo depth, was found to be significantly less than 50 mGy. (Author).
Ionization and maximum energy of nuclei in shock acceleration theory
Morlino, Giovanni
2011-01-01
We study the acceleration of heavy nuclei at SNR shocks when the process of ionization is taken into account. Heavy atoms (Z_N > a few) in the interstellar medium which enter diffusive shock acceleration (DSA) are never fully ionized at the moment of injection. Ionization occurs during the acceleration process, when the atoms already move relativistically. For the typical environment around SNRs, photo-ionization due to the background galactic radiation dominates over Coulomb collisions. The main consequence of ionization is the reduction of the maximum energy which ions can achieve with respect to the standard result of DSA. In fact, photo-ionization has a timescale comparable to the beginning of the Sedov-Taylor phase, hence the maximum energy is no longer proportional to the nuclear charge, as predicted by standard DSA, but rather to the effective ionic charge during the acceleration process, which is smaller than the total nuclear charge Z_N. This result can have a direct consequence in the pred...
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained by grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Hao, Jingying; Wang, Shenghui; Jin, Yuexin; Zheng, Hong
2015-01-01
Photovoltaic cells are the devices that generate electric energy in a photovoltaic power generation system. Operating photovoltaic cells exhibit a typical nonlinear characteristic under the influence of environmental temperature, irradiance and other factors, and under different external conditions a photovoltaic cell runs at a different, unique maximum power point. This paper analyzes the most commonly used maximum power point tracking methods and proposes a new maximum power tracking method that reaches the maximum power point quickly and eliminates the oscillation problem during tracking. The control effectiveness is verified by Matlab/Simulink simulation, and a good output waveform is obtained.
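The classic perturb-and-observe idea that methods like this one build on can be sketched as follows; the toy PV power curve, open-circuit voltage, step size and iteration count are invented for illustration and are not the controller proposed in the paper:

```python
# Minimal perturb-and-observe (P&O) MPPT sketch. The PV model below is an
# illustrative assumption: current collapses sharply near open-circuit voltage.

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV curve: power = V * I with a sharp knee near v_oc."""
    if v <= 0 or v >= v_oc:
        return 0.0
    return v * i_sc * (1.0 - (v / v_oc) ** 12)

def perturb_and_observe(v=20.0, step=0.5, iters=200):
    """Perturb the operating voltage; reverse direction whenever power drops."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:        # power fell: we stepped past the MPP, turn around
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
```

Once near the MPP this loop keeps oscillating by one step around it, which is exactly the oscillation problem that improved tracking methods such as the one above aim to eliminate.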
Modeling the Maximum Spreading of Liquid Droplets Impacting Wetting and Nonwetting Surfaces.
Lee, Jae Bong; Derome, Dominique; Guyer, Robert; Carmeliet, Jan
2016-02-09
Droplet impact has been imaged on different rigid, smooth, and rough substrates for three liquids with different viscosity and surface tension, with special attention to the lower impact velocity range. Of all studied parameters, only surface tension and viscosity, i.e. the liquid properties, clearly play a role in the attained maximum spreading ratio of the impacting droplet. Surface roughness and type of surface (steel, aluminum, and parafilm) slightly affect the dynamic wettability and maximum spreading at low impact velocity. The dynamic contact angle at maximum spreading has been identified as the proper characterization of this dynamic spreading process, especially at low impact velocity where dynamic wetting plays an important role. The dynamic contact angle is found to be generally higher than the equilibrium contact angle, showing that statically wetting surfaces can become less wetting, or even nonwetting, under dynamic droplet impact. An improved energy balance model for the maximum spreading ratio is proposed based on a correct analytical modeling of the time at maximum spreading, which determines the viscous dissipation. Experiments show that the time at maximum spreading decreases with impact velocity depending on the surface tension of the liquid, and a scaling with maximum spreading diameter and surface tension is proposed. A second improvement is the use of the dynamic contact angle at maximum spreading, instead of quasi-static contact angles, to describe the dynamic wetting process at low impact velocity. This improved model shows good agreement with experiments for the maximum spreading ratio versus impact velocity for different liquids, and better prediction than other models in the literature. In particular, scaling according to We^(1/2) is found invalid at low velocities, since the curves bend over to higher maximum spreading ratios due to the dynamic wetting process.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Profiles in patient safety: when an error occurs.
Hobgood, Cherri; Hevia, Armando; Hinchey, Paul
2004-07-01
Medical error is now clearly established as one of the most significant problems facing the American health care system. Anecdotal evidence, studies of human cognition, and analysis of high-reliability organizations all predict that despite excellent training, human error is unavoidable. When an error occurs and is recognized, providers have a duty to disclose the error. Yet disclosure of error to patients, families, and hospital colleagues is a difficult and/or threatening process for most physicians. A more thorough understanding of the ethical and social contract between physicians and their patients, as well as the professional milieu surrounding an error, may improve the likelihood of its disclosure. Key among these is the identification of institutional factors that support disclosure and recognize error as an unavoidable part of the practice of medicine. Using a case-based format, this article focuses on the communication of error with patients, families, and colleagues and grounds error disclosure in the cultural milieu of medical ethics.
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle, and we show, with the help of simple examples of well-known chemical and physical systems, that they are in error. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results, namely: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, two important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that provides an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory, choosing the analysis of the Ne IX triplet around 13.5 Å.
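The Poisson-likelihood idea behind this kind of line fitting can be illustrated with a minimal sketch. The Gaussian line profile, wavelength grid, toy counts and crude grid search below are all invented assumptions; CORA itself solves a fixed-point equation rather than searching a grid:

```python
import math

# Toy Poisson maximum-likelihood fit of an emission-line amplitude on
# low-count binned data (illustrative sketch, not CORA's actual algorithm).

def model(amp, bins, center=13.5, sigma=0.02, background=0.5):
    """Expected counts per bin: flat background plus a Gaussian line."""
    return [background + amp * math.exp(-0.5 * ((x - center) / sigma) ** 2)
            for x in bins]

def poisson_loglike(counts, expected):
    # log L = sum_i (n_i * log mu_i - mu_i), dropping the constant log(n_i!) terms
    return sum(n * math.log(mu) - mu for n, mu in zip(counts, expected))

bins = [13.40 + 0.01 * k for k in range(21)]           # wavelength grid (Angstrom)
observed = [0, 1, 0, 1, 0, 2, 1, 3, 6, 9, 12, 8, 5, 2, 1, 0, 1, 0, 0, 1, 0]

# Crude one-dimensional grid search over the line amplitude.
best_amp = max((0.1 * a for a in range(1, 300)),
               key=lambda a: poisson_loglike(observed, model(a, bins)))
```

The key point the abstract makes is visible here: with counts this low, Gaussian (chi-square) statistics would be a poor approximation, while the Poisson likelihood handles the sparse bins rigorously.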
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
A comparison of substorms occurring during magnetic storms with those occurring during quiet times
McPherron, R. L.; Hsu, T.-S.
2002-09-01
It has been suggested that there may be a fundamental difference between substorms that occur during magnetic storms and those that occur at other times. [1996] presented evidence that there is no obvious change in lobe field in "quiet time" substorms but that "storm time" substorms exhibit the classic pattern of storage and release of lobe field energy. This result led them to speculate that the former are caused by current sheet disruption, while the latter are caused by reconnection of lobe flux. In this paper we examine their hypothesis with a much larger data set using definitions of the two types of substorms similar to theirs, as well as additional more restrictive definitions of these classes of events. Our results show that the only differences between the various classes are the absolute value of the lobe field and the size of the changes. When the data are normalized to unit field amplitude, we find that the percent change during storm time and non-storm time substorms is nearly the same. The above conclusions are demonstrated with superposed epoch analysis of lobe field (Bt and Bz) for four classes of substorms: active times (Dst -25 nT), and quiet time substorms (no evidence of storm in Dst). Epoch zero for the analysis was taken as the main substorm onset (Pi2 onset closest to sharp break in AL index). Our results suggest that there is no qualitative distinction between the various classes of substorms, and so they are all likely to be caused by the same mechanism.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Shape Modelling Using Maximum Autocorrelation Factors
Larsen, Rasmus
2001-01-01
This paper addresses the problem of generating a low-dimensional representation of the shape variation present in a training set after alignment using Procrustes analysis and projection into shape tangent space. We extend the use of principal components analysis in the original formulation of Active Shape Models by Timothy Cootes and Christopher Taylor by building new information into the model. This new information consists of two types of prior knowledge. First, in many situations we will be given an ordering of the shapes of the training set. This situation occurs when the shapes of the training set are in reality a time series, e.g. snapshots of a beating heart during the cardiac cycle, or when the shapes are slices of a 3D structure, e.g. the spinal cord. Second, in almost all applications a natural order of the landmark points along the contour of the shape is introduced.
Verdon-Kidd, D. C.; Kiem, A. S.
2015-12-01
Rainfall intensity-frequency-duration (IFD) relationships are commonly required for the design and planning of water supply and management systems around the world. Currently, IFD information is based on the "stationary climate assumption" that weather at any point in time will vary randomly and that the underlying climate statistics (including both averages and extremes) will remain constant irrespective of the period of record. However, the validity of this assumption has been questioned over the last 15 years, particularly in Australia, following an improved understanding of the significant impact of climate variability and change occurring on interannual to multidecadal timescales. This paper provides evidence of regime shifts in annual maximum rainfall time series (between 1913-2010) using 96 daily rainfall stations and 66 sub-daily rainfall stations across Australia. Furthermore, the effect of these regime shifts on the resulting IFD estimates are explored for three long-term (1913-2010) sub-daily rainfall records (Brisbane, Sydney, and Melbourne) utilizing insights into multidecadal climate variability. It is demonstrated that IFD relationships may under- or over-estimate the design rainfall depending on the length and time period spanned by the rainfall data used to develop the IFD information. It is recommended that regime shifts in annual maximum rainfall be explicitly considered and appropriately treated in the ongoing revisions of the Engineers Australia guide to estimating and utilizing IFD information, Australian Rainfall and Runoff (ARR), and that clear guidance needs to be provided on how to deal with the issue of regime shifts in extreme events (irrespective of whether this is due to natural or anthropogenic climate change). The findings of our study also have important implications for other regions of the world that exhibit considerable hydroclimatic variability and where IFD information is based on relatively short data sets.
Promoter recognition based on the maximum entropy hidden Markov model.
Zhao, Xiao-yu; Zhang, Jin; Chen, Yuan-yuan; Li, Qiang; Yang, Tao; Pian, Cong; Zhang, Liang-yun
2014-08-01
Since the fast development of genome sequencing has produced large-scale data, current work uses bioinformatics methods to recognize different gene regions, such as exons, introns and promoters, which play an important role in gene regulation. In this paper, we introduce a new method based on the maximum entropy Markov model (MEMM) to recognize promoters, which conditions on biological features of the promoter. However, it leads to a high false positive rate (FPR). In order to reduce the FPR, we provide another new method based on the maximum entropy hidden Markov model (ME-HMM) without the independence assumption, which can also accommodate the biological features effectively. To demonstrate the precision, the new methods are implemented in the R language and the hidden Markov model (HMM) is introduced for comparison. The experimental results show that the new methods not only overcome the shortcomings of HMM, but also have their own advantages: MEMM is excellent at identifying conserved signals, and ME-HMM demonstrably improves the true positive rate.
Maximum entropy production in environmental and ecological systems.
Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M
2010-05-12
The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.
Artificial Neural Network In Maximum Power Point Tracking Algorithm Of Photovoltaic Systems
Modestas Pikutis
2014-05-01
Scientists are continually looking for ways to improve the efficiency of solar cells. The efficiency of solar cells available to the general public is up to 20%. If a slow controller is used, or one that cannot hold the solar modules at their maximum power point, part of the solar energy goes unused and the capacity of the solar power plant is significantly reduced. Various maximum power point tracking algorithms have been created, but most are slow or make mistakes. In the literature, artificial neural networks (ANN) are increasingly mentioned for the maximum power point tracking process in order to improve controller performance. A self-learning artificial neural network and the IncCond algorithm were used for maximum power point tracking in the solar power plant model created here, and the control algorithm was developed. The solar power plant model is implemented in the Matlab/Simulink environment.
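The incremental-conductance (IncCond) rule mentioned above can be sketched as follows; the toy PV current curve and step size are illustrative assumptions, not the Simulink model or the ANN controller described in the paper. The rule exploits the fact that at the maximum power point dP/dV = 0, i.e. dI/dV = -I/V:

```python
# Minimal incremental-conductance (IncCond) MPPT sketch on a toy PV curve.

def pv_current(v, v_oc=40.0, i_sc=8.0):
    """Toy PV I-V curve with a sharp knee near the open-circuit voltage."""
    return 0.0 if not 0.0 < v < v_oc else i_sc * (1.0 - (v / v_oc) ** 12)

def inc_cond(v=20.0, step=0.2, iters=400):
    v_prev, i_prev = v, pv_current(v)
    for _ in range(iters):
        i = pv_current(v)
        dv, di = v - v_prev, i - i_prev
        v_prev, i_prev = v, i
        if dv == 0.0:
            v += step                        # no perturbation yet: probe upward
        elif abs(di / dv + i / v) < 1e-3:
            pass                             # dI/dV == -I/V: hold at the MPP
        elif di / dv > -i / v:
            v += step                        # left of the MPP: raise voltage
        else:
            v -= step                        # right of the MPP: lower voltage
    return v

v_mpp = inc_cond()
```

Unlike plain P&O, IncCond can in principle stop exactly at the MPP (the `pass` branch) instead of perpetually oscillating, which is why it is a popular baseline for ANN-assisted trackers.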
Application of Maximum Entropy Deconvolution to ${\\gamma}$-ray Skymaps
Raab, Susanne
2015-01-01
Skymaps measured with imaging atmospheric Cherenkov telescopes (IACTs) represent the real source distribution convolved with the point spread function of the observing instrument. Current IACTs have an angular resolution on the order of 0.1$^\circ$, which is rather large for the study of morphological structures and for comparing the morphology in $\gamma$-rays to measurements at other wavelengths where the instruments have better angular resolutions. Fortunately, it is possible to approximate the underlying true source distribution by applying a deconvolution algorithm to the observed skymap, thus effectively improving the instrument's angular resolution. Of the multitude of existing deconvolution algorithms, several are already used in astronomy, but in the special case of $\gamma$-ray astronomy most of these algorithms are challenged by the high noise level within the measured data. One promising algorithm for application to $\gamma$-ray data is the Maximum Entropy Algorithm. The advantages of th...
Adaptive edge image enhancement based on maximum fuzzy entropy
ZHANG Xiu-hua; YANG Kun-tao
2006-01-01
Based on the maximum fuzzy entropy principle, a low-contrast edge image is optimally and adaptively classified into two classes under the conditions of a probability partition and a fuzzy partition. The optimal threshold is used as the classification threshold, and a local parametric grey-level transformation is applied to the resulting classes. By means of two representative parameters, the homogeneity of the regions in the edge image is improved. Simulations on a set of test images show that the proposed technique performs excellently in terms of homogeneity, and that the extracted and enhanced edges provide an efficient edge representation of the images.
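The threshold-selection step can be sketched in a toy form as follows; the membership function, the entropy definition and the grey-level histogram below are illustrative assumptions, not the exact formulation used in the paper:

```python
import math

# Toy sketch of threshold selection by maximum fuzzy entropy: for each
# candidate threshold t, build a two-class fuzzy partition of grey levels
# and score it with a Shannon-style fuzzy entropy.

def fuzzy_entropy(hist, t):
    """Fuzzy entropy of the dark/bright partition of `hist` at threshold t."""
    n_lo = sum(hist[:t + 1]) or 1
    n_hi = sum(hist[t + 1:]) or 1
    mean_lo = sum(g * hist[g] for g in range(t + 1)) / n_lo
    mean_hi = sum(g * hist[g] for g in range(t + 1, len(hist))) / n_hi
    h = 0.0
    for g, n in enumerate(hist):
        if n == 0:
            continue
        # membership of grey level g in the "dark" class: 1 at mean_lo, 0 at mean_hi
        mu = 1.0 / (1.0 + abs(g - mean_lo) / (abs(g - mean_hi) + 1e-9))
        for m in (mu, 1.0 - mu):
            if 0.0 < m < 1.0:
                h -= n * m * math.log(m)
    return h

hist = [5, 40, 18, 4, 2, 3, 6, 30, 12, 1]      # toy bimodal grey-level histogram
best_t = max(range(1, len(hist) - 1), key=lambda t: fuzzy_entropy(hist, t))
```

In a full implementation the selected threshold would then drive the local grey-level transformation applied to each class.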
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq
2012-06-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
[Anonymous]
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Exponential and double exponential tails for maximum of two-dimensional discrete Gaussian free field
Ding, Jian
2011-01-01
We study the tail behavior for the maximum of discrete Gaussian free field on a 2D box with Dirichlet boundary condition after centering by its expectation. We show that it exhibits an exponential decay for the right tail and a double exponential decay for the left tail. In particular, our result implies that the variance of the maximum is of order 1, improving an $o(\\log n)$ bound by Chatterjee (2008) and confirming a folklore conjecture. An important ingredient for our proof is a result of Bramson and Zeitouni (2010), who proved the tightness of the centered maximum together with an evaluation of the expectation up to an additive constant.
The Change in the Maximum Wind Speed and the Impact of it on Agricultural Production
WU Jian-mei; SUN Jin-sen; SUI Gui-ling; XIE Su-he; WANG Meng
2012-01-01
Using data on the maximum ten-minute wind speed in every month over the period 1971-2009 in Zhucheng City, Shandong Province, we conduct a statistical analysis of the maximum wind speed in Zhucheng City. The results show that over these thirty-nine years, the annual maximum wind speed in all four seasons tends to decline. The annual maximum wind speed declines at a rate of 1.45 m/s every 10 years. It falls fastest in winter, with a decline rate of 1.73 m/s every 10 years; the rates in spring and autumn are close to the annual average, at 1.44 m/s and 1.48 m/s every 10 years, respectively; it falls slowest in summer, and the extreme values of the maximum wind speed occur mainly in spring. The curve of monthly maximum wind speed in Zhucheng City takes a diminishing "two peaks and one trough" shape. We conduct a preliminary analysis of windy weather situations and put forward specific defensive measures against the hazards of strong winds in the different periods.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
A probabilistic estimate of maximum acceleration in rock in the contiguous United States
Algermissen, Sylvester Theodore; Perkins, David M.
1976-01-01
This paper presents a probabilistic estimate of the maximum ground acceleration to be expected from earthquakes occurring in the contiguous United States. It is based primarily upon the historic seismic record which ranges from very incomplete before 1930 to moderately complete after 1960. Geologic data, primarily distribution of faults, have been employed only to a minor extent, because most such data have not been interpreted yet with earthquake hazard evaluation in mind. The map provides a preliminary estimate of the relative hazard in various parts of the country. The report provides a method for evaluating the relative importance of the many parameters and assumptions in hazard analysis. The map and methods of evaluation described reflect the current state of understanding and are intended to be useful for engineering purposes in reducing the effects of earthquakes on buildings and other structures. Studies are underway on improved methods for evaluating the relative earthquake hazard of different regions. Comments on this paper are invited to help guide future research and revisions of the accompanying map. The earthquake hazard in the United States has been estimated in a variety of ways since the initial effort by Ulrich (see Roberts and Ulrich, 1950). In general, the earlier maps provided an estimate of the severity of ground shaking or damage but the frequency of occurrence of the shaking or damage was not given. Ulrich's map showed the distribution of expected damage in terms of no damage (zone 0), minor damage (zone 1), moderate damage (zone 2), and major damage (zone 3). The zones were not defined further and the frequency of occurrence of damage was not suggested. Richter (1959) and Algermissen (1969) estimated the ground motion in terms of maximum Modified Mercalli intensity. Richter used the terms "occasional" and "frequent" to characterize intensity IX shaking and Algermissen included recurrence curves for various parts of the country in the paper
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
王雪丽; 陶剑; 史宁中
2005-01-01
The primary goal of a phase I clinical trial is to find the maximum tolerable dose of a treatment. In this paper, we propose a new stepwise method based on confidence bounds and information incorporation to determine the maximum tolerable dose among given dose levels. On the one hand, in order to avoid severe or even fatal toxicity and to reduce the number of experimental subjects, the new method starts from the lowest dose level and then proceeds in a stepwise fashion. On the other hand, in order to improve the accuracy of the recommendation, the final recommendation of the maximum tolerable dose is made by incorporating information from an additional experimental cohort at the same dose level. Furthermore, empirical simulation results show that the new method has some real advantages in comparison with the modified continual reassessment method.
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Iron deficiency occurs frequently in children with cystic fibrosis.
Uijterschout, Lieke; Nuijsink, Marianne; Hendriks, Daniëlle; Vos, Rimke; Brus, Frank
2014-05-01
In adult CF patients iron deficiency (ID) is common and primarily functional due to chronic inflammation. No recent data are available on the cause of ID and iron deficiency anemia (IDA) in children with CF. Over the last decades, the onset of inflammation and pulmonary disease in children with CF has been delayed by improved nutritional status. We questioned whether ID occurs to the same extent among children with CF as in adult CF patients. We therefore conducted a study to investigate the iron status of children with CF and to determine whether ID and IDA are associated with dietary iron intake, lung disease severity and Pseudomonas aeruginosa (PA) infection. Clinical charts of 53 children with CF aged 0-16 were reviewed. Follow-up varied from 1 to 14 years, with 343 annual observations in total. Thirty-two children (60.4%) were iron deficient in at least 1 year and ID was present in 84 of 343 observations (24.5%). In 2011 ID was present in 9 children (17.0%). Ten children (18.9%) were anemic in at least 1 year and anemia was present in 13 of 328 observations (4.0%). IDA was present in at least 1 year in 6 children (11.3%). Ferritin (Fer) was positively associated with age. Higher Fer values found in older children represent an increased state of inflammation, rather than an improved iron status, and might increase the relative contribution of functional ID. This study shows that ID is common in relatively healthy, well-nourished children with CF. The mechanism of ID in children with CF is currently unknown. A prospective study using both soluble transferrin receptor and Fer as indicators for ID will provide more insight into the incidence and causes of ID in children with CF.
Co-occurring disorders: policy and practice in Germany.
Hintz, Thomas; Mann, Karl
2006-01-01
The occurrence of substance use disorders (SUD) with other mental disorders, often referred to as co-occurring disorders (COD), is a common phenomenon, but for a long time little attention was paid to this problem in Germany. During the last 25 years, however, COD awareness has increased due to a shift toward community-based services. Scientific research has also demonstrated the significance and clinical relevance of COD. High prevalence rates and evidence of poor clinical outcomes were found in German studies. Many practitioners as well as policymakers acknowledge that changes in systems of care are necessary to meet the requirements of COD patients. The traditional German system is currently divided into addiction services and mental health services (predominantly in inpatient settings), often resulting in ineffective sequential treatment for COD patients. Research demonstrates that integrative treatment models are more appropriate, and the division of services should be reorganized to help COD patients appropriately. Efforts have already been made to restructure healthcare systems toward a more flexible approach with improved networking between in- and outpatient services. A further issue is the general attitude toward SUD patients. Many practitioners continue to hold negative opinions (e.g., "SUD patients are only weak-minded") or feel insecure when confronted with SUD. This results in SUD problems being frequently ignored or depreciated. Educational programs have been intensified over recent years to address this problem (e.g., Fachkunde Sucht, an advanced training program on SUD). In general, treatment conditions for COD patients are improving, but further efforts are necessary. Guidelines and treatment strategies for COD patients have recently been published in Germany.
Observed Abrupt Changes in Minimum and Maximum Temperatures in Jordan in the 20th Century
Mohammad M. Samdi
2006-01-01
This study examines changes in annual and seasonal mean (minimum and maximum) temperature variations in Jordan during the 20th century. The analyses focus on the time series records at the Amman Airport Meteorological (AAM) station. The occurrence of abrupt changes and trends was examined using cumulative sum charts (CUSUM) with bootstrapping and the Mann-Kendall rank test. Statistically significant abrupt changes and trends have been detected. Major change points in the mean minimum (night-time) and mean maximum (day-time) temperatures occurred in 1957 and 1967, respectively. A minor change point in the annual mean maximum temperature also occurred in 1954, which is in essential agreement with the detected change in minimum temperature. The analysis showed a significant warming trend after the years 1957 and 1967 for the minimum and maximum temperatures, respectively. The analysis of maximum temperatures shows a significant warming trend after the year 1967 for the summer season, with a rate of temperature increase of 0.038°C/year. The analysis of minimum temperatures shows a significant warming trend after the year 1957 for all seasons. Temperature and rainfall data from other stations in the country have been considered and showed similar changes.
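The CUSUM change-point idea used above can be sketched in a few lines: a change point is suggested where the cumulative sum of deviations from the overall mean reaches its extreme value. This is a hedged illustration on synthetic data, not the authors' analysis of the AAM records:

```python
# Sketch of a CUSUM change-point scan on a one-dimensional series.

def cusum_change_point(series):
    mean = sum(series) / len(series)
    s, path = 0.0, []
    for x in series:
        s += x - mean
        path.append(s)
    # 1-based index where |CUSUM| is largest -> candidate change point
    return max(range(len(path)), key=lambda i: abs(path[i])) + 1

# Synthetic temperature-like series: the mean shifts upward after index 30
data = [15.0] * 30 + [16.5] * 30
print(cusum_change_point(data))  # 30
```

In the CUSUM-and-bootstrap procedure the study cites, the significance of such a candidate point is then judged by comparing the CUSUM range against ranges from reshuffled copies of the series.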
Mass transfer trends occurring in engineered ex vivo tissue scaffolds.
Moore, Marc; Sarntinoranont, Malisa; McFetridge, Peter
2012-08-01
In vivo the vasculature provides an effective delivery system for cellular nutrients; however, artificial scaffolds have no such mechanism, and the ensuing limitations in mass transfer result in limited regeneration. In these investigations, the regional mass transfer properties that occur through a model scaffold derived from the human umbilical vein (HUV) were assessed. Our aim was to define the heterogeneous behavior associated with these regional variations, and to establish if different decellularization technologies can modulate transport conditions to improve microenvironmental conditions that enhance cell integration. The effect of three decellularization methods [Triton X-100 (TX100), sodium dodecyl sulfate (SDS), and acetone/ethanol (ACE/EtOH)] on mass transfer, cellular migration, proliferation, and metabolic activity were assessed. Results show that regional variation in tissue structure and composition significantly affects both mass transfer and cell function. ACE/EtOH decellularization was shown to increase albumin mass flux through the intima and proximate-medial region (0-250 μm) when compared with sections decellularized with TX100 or SDS; although, mass flux remained constant over all regions of the full tissue thickness when using TX100. Scaffolds decellularized with TX100 were shown to promote cell migration up to 146% further relative to SDS decellularized samples. These results show that depending on scaffold derivation and expectations for cellular integration, specificities of the decellularization chemistry affect the scaffold molecular architecture resulting in variable effects on mass transfer and cellular response.
Motor sequence learning occurs despite disrupted visual and proprioceptive feedback
Boyd Lara A
2008-07-01
Background: Recent work has demonstrated the importance of proprioception for the development of internal representations of the forces encountered during a task. Evidence also exists for a significant role for proprioception in the execution of sequential movements. However, little work has explored the role of proprioceptive sensation during the learning of continuous movement sequences. Here, we report that the repeated segment of a continuous tracking task can be learned despite peripherally altered arm proprioception and severely restricted visual feedback regarding motor output. Methods: Healthy adults practiced a continuous tracking task over 2 days. Half of the participants experienced vibration that altered proprioception of shoulder flexion/extension of the active tracking arm (experimental condition) and half experienced vibration of the passive resting arm (control condition). Visual feedback was restricted for all participants. Retention testing was conducted on a separate day to assess motor learning. Results: Regardless of vibration condition, participants learned the repeated segment, demonstrated by significant improvements in accuracy for tracking repeated as compared to random continuous movement sequences. Conclusion: These results suggest that with practice, participants were able to use residual afferent information to overcome initial interference of tracking ability related to altered proprioception and restricted visual feedback to learn a continuous motor sequence. Motor learning occurred despite an initial interference with tracking noted during acquisition practice.
Maximum covariance analysis to identify intraseasonal oscillations over tropical Brazil
Barreto, Naurinete J. C.; Mesquita, Michel d. S.; Mendes, David; Spyrides, Maria H. C.; Pedra, George U.; Lucio, Paulo S.
2017-09-01
A reliable prognosis of extreme precipitation events in the tropics is arguably challenging to obtain due to the interaction of meteorological systems at various time scales. A pivotal component of the global climate variability is the so-called intraseasonal oscillations, phenomena that occur between 20 and 100 days. The Madden-Julian Oscillation (MJO), which is directly related to the modulation of convective precipitation in the equatorial belt, is considered the primary oscillation in the tropical region. The aim of this study is to diagnose the connection between the MJO signal and the regional intraseasonal rainfall variability over tropical Brazil. This is achieved through the development of an index called Multivariate Intraseasonal Index for Tropical Brazil (MITB). This index is based on Maximum Covariance Analysis (MCA) applied to the filtered daily anomalies of rainfall data over tropical Brazil against a group of covariates consisting of: outgoing longwave radiation and the zonal component u of the wind at 850 and 200 hPa. The first two MCA modes, which were used to create the MITB_1 and MITB_2 indices, represent 65% and 16% of the explained variance, respectively. The combined multivariate index was able to satisfactorily represent the pattern of intraseasonal variability over tropical Brazil, showing that there are periods of activation and inhibition of precipitation connected with the pattern of MJO propagation. The MITB index could potentially be used as a diagnostic tool for intraseasonal forecasting.
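At its core, Maximum Covariance Analysis takes the singular value decomposition of the cross-covariance matrix between two data sets; the leading singular vectors are the coupled modes. A pure-Python sketch under assumed toy data (power iteration stands in for a full SVD; the two "fields" below share one common signal and are not the study's data):

```python
# Illustrative MCA core: leading singular value of a cross-covariance matrix.

def cross_cov(X, Y):
    """Cross-covariance matrix between columns of X and columns of Y."""
    n = len(X)
    mx = [sum(col) / n for col in zip(*X)]
    my = [sum(col) / n for col in zip(*Y)]
    return [[sum((X[k][i] - mx[i]) * (Y[k][j] - my[j]) for k in range(n)) / n
             for j in range(len(my))] for i in range(len(mx))]

def leading_singular_value(C, iters=200):
    """Power iteration on C^T C yields the leading singular value of C."""
    m = len(C[0])
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(len(C))]
        v = [sum(C[i][j] * w[i] for i in range(len(C))) for j in range(m)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(len(C))]
    return sum(x * x for x in w) ** 0.5

# Two toy 2-variable "fields" driven by one shared signal t
t = [0.1 * k for k in range(50)]
X = [[x, 0.5 * x] for x in t]
Y = [[2.0 * x, -x] for x in t]
sigma1 = leading_singular_value(cross_cov(X, Y))
print(sigma1 > 0)  # True: a dominant coupled mode exists
```

The squared singular values divided by their total give the "explained squared covariance fraction" that figures like the 65% and 16% in the abstract report.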
ZHANG Hong-lie; ZHANG Guo-yin; YAO Ai-hong
2010-01-01
This paper presents an algorithm that combines the chaos optimization algorithm with maximum entropy (COA-ME), using an entropy model based on the chaos algorithm in which the maximum entropy serves as a secondary method of searching for good solutions. The search direction is improved by the chaos optimization algorithm, which allows the selective acceptance of worse solutions. The experimental results show that the presented algorithm can be used in the hardware/software partitioning of reconfigurable systems. It effectively reduces the local extremum problem, and both the search speed and the partitioning performance are improved.
A maximum power point tracking algorithm for photovoltaic applications
Nelatury, Sudarshan R.; Gray, Robert
2013-05-01
The voltage and current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. But hitherto, an exact closed-form solution for the MPP has not been published. This problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-for MPP. It is subject to change with the incident irradiation and temperature, and hence the algorithm that attempts to maintain the MPP should be adaptive in nature and is expected to have fast convergence and the least misadjustment. There are two parts in its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance, we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with the PSIM software.
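The perturb-and-observe (P&O) family of trackers mentioned across these abstracts can be sketched very simply: perturb the operating voltage, keep the direction if power rose, reverse it if power fell. This is a minimal illustration on an assumed concave toy curve, not the MSX-60 model or the paper's MATLAB/PSIM setup:

```python
# Hedged P&O MPPT sketch on a toy single-peak power-voltage curve.

def pv_power(v):
    """Toy P-V curve with a single maximum at V = 17.0 V (assumption)."""
    return max(0.0, 60.0 - 0.8 * (v - 17.0) ** 2)

def perturb_and_observe(v0, step=0.1, iters=500):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:          # power fell: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe(12.0)
print(abs(v_mpp - 17.0) < 0.2)  # True: settles within one step of the MPP
```

The steady-state oscillation of about one step around the peak is exactly the behavior that methods such as the nonlinear open-circuit-voltage estimate described in the header abstract aim to reduce.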
Beyond maximum entropy: Fractal Pixon-based image reconstruction
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States)
2016-02-22
The objectives of this report are: to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; to provide maximum water concentrations, and the corresponding amount of mass sorbed to the solid fill material, that could occur in each building for use in dose assessment calculations; to estimate the maximum concentration in a well located outside of the fill material; and to perform a sensitivity analysis of key parameters.
Global Maximum Power Point Tracking of Photovoltaic Array under Partial Shaded Conditions
G.Shobana, P. Sornadeepika, Dr. R. Ramaprabha
2013-07-01
Efficiency of a PV module can be improved by operating at its peak power point so that the maximum power can be delivered to the load under varying environmental conditions. This paper is mainly focused on maximum power point tracking of a solar photovoltaic (PV) array under non-uniform insolation conditions. A maximum power point tracker (MPPT) is used for extracting the maximum power from the solar PV module and transferring that power to the load. Tracking the maximum power point (MPP) becomes difficult when the array receives non-uniform insolation. Cells under shade absorb a large amount of the electric power generated by cells receiving high insolation and convert it into heat, which may damage the low-illuminated cells. To relieve the stress on shaded cells, bypass diodes are added across the modules. In such a case, multiple peaks in the voltage-power characteristics are observed. Classical MPPT methods are not effective due to their inability to discriminate between local and global maxima. In this paper, a global MPPT algorithm is proposed to track the global maximum power point of a PV array under partially shaded conditions.
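The failure mode described above can be reproduced on a toy two-peak curve: a local hill-climber started on the wrong hump locks onto the smaller peak, while a coarse global sweep finds the true MPP. The curve and voltages below are assumptions for illustration, not the paper's algorithm or array:

```python
# Sketch: local hill-climbing vs. a global scan on a two-peak P-V curve.

def pv_power(v):
    # Toy partial-shading curve: local peak near V = 8, global peak near V = 17
    return (max(0.0, 30.0 - 1.2 * (v - 8.0) ** 2)
            + max(0.0, 55.0 - 0.9 * (v - 17.0) ** 2))

def hill_climb(v, step=0.1, iters=400):
    for _ in range(iters):
        if pv_power(v + step) > pv_power(v):
            v += step
        elif pv_power(v - step) > pv_power(v):
            v -= step
    return v

def global_scan(v_min=0.0, v_max=25.0, coarse=0.5):
    # Coarse sweep over the whole range, then refine around the best sample
    vs = [v_min + coarse * k for k in range(int((v_max - v_min) / coarse) + 1)]
    best = max(vs, key=pv_power)
    return hill_climb(best)

local = hill_climb(5.0)    # trapped near the local peak at V ~ 8
best = global_scan()       # finds the global peak at V ~ 17
print(pv_power(best) > pv_power(local))  # True
```

The coarse-then-refine structure mirrors the general idea of global MPPT schemes: spend a few samples ensuring the right hump is selected before a conventional tracker takes over.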
Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion
Poljak, Nikola
2016-11-01
The problem of determining the angle θ at which a point mass launched from ground level with a given speed v0 will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of D_max = v0²/g, with g being the free-fall acceleration. Conceptually and calculationally more difficult problems have been suggested to improve student proficiency in projectile motion, with the most famous example being the Tarzan swing problem. The problem of determining the maximum distance of a point mass thrown from constant-speed circular motion is presented and analyzed in detail in this text. The calculational results confirm several conceptually derived conclusions regarding the initial throw position and provide some details on the angles and the way of throwing (underhand or overhand) that produce the maximum throw distance.
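The textbook baseline quoted above is easy to verify numerically: scanning launch angles for R(θ) = v0² sin(2θ)/g (flat ground, no drag) recovers θ = π/4 and D_max = v0²/g. A short sketch with assumed values v0 = 10 m/s, g = 9.81 m/s²:

```python
# Numerical check of the standard ground-to-ground projectile range result.
import math

def launch_range(theta, v0=10.0, g=9.81):
    return v0 ** 2 * math.sin(2.0 * theta) / g

angles = [k * math.pi / 2000.0 for k in range(1001)]   # 0 .. pi/2
best = max(angles, key=launch_range)
print(round(math.degrees(best), 1))                        # 45.0
print(abs(launch_range(best) - 10.0 ** 2 / 9.81) < 1e-9)   # True
```

The same brute-force scan generalizes directly to the circular-motion throw studied in the paper, where the optimal release point and angle no longer have such a clean closed form.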
Investigation on the Maximum Power Point in Solar Panel Characteristics Due to Irradiance Changes
Abdullah, M. A.; Fauziah Toha, Siti; Ahmad, Salmiah
2017-03-01
One of the disadvantages of the photovoltaic module as compared to other renewable resources is the dynamic characteristic of solar irradiance due to inconsistent weather conditions and surrounding temperature. Commonly, photovoltaic power generation systems include an embedded control system to maximize the power generation despite the inconsistency in irradiance. In order to improve the simplicity of the power optimization control, this paper presents the characteristics of the maximum power point at various irradiance levels for maximum power point tracking (MPPT). The technique requires a set of data from a photovoltaic simulation model to be extrapolated as a standard relationship between irradiance and maximum power. The result shows that the relationship between irradiance and maximum power can be represented by a simplified quadratic equation.
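The quadratic summary the abstract describes amounts to fitting P = a·G² + b·G + c through sampled (irradiance, maximum power) pairs. A hedged sketch using three made-up samples (not measured module data), with the coefficients obtained from the Lagrange interpolating quadratic:

```python
# Sketch: an exact quadratic through three (irradiance, P_max) samples.

def fit_quadratic(p1, p2, p3):
    """Coefficients (a, b, c) of the quadratic through three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d1 = (x1 - x2) * (x1 - x3)
    d2 = (x2 - x1) * (x2 - x3)
    d3 = (x3 - x1) * (x3 - x2)
    a = y1 / d1 + y2 / d2 + y3 / d3
    b = -y1 * (x2 + x3) / d1 - y2 * (x1 + x3) / d2 - y3 * (x1 + x2) / d3
    c = y1 * x2 * x3 / d1 + y2 * x1 * x3 / d2 + y3 * x1 * x2 / d3
    return a, b, c

# Assumed samples: (irradiance in W/m^2, maximum power in W)
a, b, c = fit_quadratic((200.0, 11.0), (600.0, 35.0), (1000.0, 57.0))
predict = lambda g: a * g ** 2 + b * g + c
print(round(predict(600.0), 1))  # 35.0
```

With more than three simulated samples, the same form would be fitted by least squares; the point is that a controller can then look up the expected maximum power for any measured irradiance from three stored coefficients.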
A Phenomenological Study on Threshold Improvement via Spatial Coupling
Takeuchi, Keigo; Kawabata, Tsutomu
2011-01-01
Kudekar et al. proved an interesting result for low-density parity-check (LDPC) convolutional codes: the belief-propagation (BP) threshold is boosted to the maximum-a-posteriori (MAP) threshold. Furthermore, the authors showed that the BP threshold for code-division multiple-access (CDMA) systems is improved up to a threshold below the optimal one via spatial coupling. In this letter, a phenomenological model for elucidating the essence of this phenomenon, called threshold improvement, is proposed. The main result implies that threshold improvement occurs for spatially-coupled general graphical models.
Estuarine nitrification: A naturally occurring fluidized bed reaction?
Owens, N. J. P.
1986-01-01
The rates of nitrification in the water column of the Tamar river estuary, southwest England, have been measured using the incorporation of H¹⁴CO₃ in samples with and without the inhibitor of nitrification, 2-chloro-6-(trichloromethyl) pyridine (N-Serve). N-Serve proved successful in totally inhibiting NH₄-oxidizing bacteria, but the activity of NO₂-oxidizing bacteria was inhibited by only 30%; other organisms were only slightly affected. Measurements of the nitrification rate made over the entire salinity range of the estuary (0-30‰) showed that maximum nitrification always coincided with the turbidity maximum. The field data suggest that the organisms responsible for nitrification were associated with periodically resuspended particulate material and that the turbidity maximum acts in a manner similar to a fluidized bed reactor. A dispersion model has been used to demonstrate that nitrification in the water column can account for 100% of the NO₂ maximum which is apparent down-estuary from the turbidity maximum.
Cortical spreading depression occurs during elective neurosurgical procedures.
Carlson, Andrew P; William Shuttleworth, C; Mead, Brittany; Burlbaw, Brittany; Krasberg, Mark; Yonas, Howard
2017-01-01
OBJECTIVE Cortical spreading depression (CSD) has been observed with relatively high frequency in the period following human brain injury, including traumatic brain injury and ischemic/hemorrhagic stroke. These events are characterized by loss of ionic gradients through massive cellular depolarization, neuronal dysfunction (depression of electrocorticographic [ECoG] activity) and slow spread (2-5 mm/min) across the cortical surface. Previous data obtained in animals have suggested that even in the absence of underlying injury, neurosurgical manipulation can induce CSD and could potentially be a modifiable factor in neurosurgical injury. The authors report their initial experience with direct intraoperative ECoG monitoring for CSD. METHODS The authors prospectively enrolled patients undergoing elective craniotomy for supratentorial lesions in cases in which the surgical procedure was expected to last > 2 hours. These patients were monitored for CSD from the time of dural opening through the time of dural closure, using a standard 1 × 6 platinum electrode coupled with an AC or full-spectrum DC amplifier. The data were processed using standard techniques to evaluate for slow potential changes coupled with suppression of high-frequency ECoG propagating across the electrodes. Data were compared with CSD validated in previous intensive care unit (ICU) studies, to evaluate recording conditions most likely to permit CSD detection, and identify likely events during the course of neurosurgical procedures using standard criteria. RESULTS Eleven patients underwent ECoG monitoring during elective neurosurgical procedures. During the periods of monitoring, 2 definite CSDs were observed to occur in 1 patient and 8 suspicious events were detected in 4 patients. In other patients, either no events were observed or artifact limited interpretation of the data. The DC-coupled amplifier system represented an improvement in stability of data compared with AC-coupled systems. Compared
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
Understanding the Role of Reservoir Size on Probable Maximum Precipitation
Woldemichael, A. T.; Hossain, F.
2011-12-01
This study addresses the question 'Does surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin?' The motivation of the study was based on the notion that the stationarity assumption that is implicit in the PMP for dam design can be undermined in the post-dam era due to an enhancement of extreme precipitation patterns by an artificial reservoir. In addition, the study lays the foundation for use of regional atmospheric models as one way to perform life cycle assessment for planned or existing dams to formulate best management practices. The American River Watershed (ARW) with the Folsom dam at the confluence of the American River was selected as the study region and the Dec-Jan 1996-97 storm event was selected for the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, the numerical modeling system, RAMS, was calibrated and validated with selected station and spatially interpolated precipitation data. The best combinations of parameterization schemes in RAMS were accordingly selected. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity terms in the model were raised to 100% from the ground up to the 500 mb level. The obtained model-based maximum 72-hr precipitation values were named extreme precipitation (EP) as a distinction from the PMPs obtained by the standard methods. Third, six hypothetical reservoir size scenarios ranging from no-dam (all-dry) to a reservoir submerging half of the basin were established to test the influence of reservoir size variation on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity that is implicit in the traditional estimation of PMP can be rendered invalid in large part due to the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give indication of the
Naturally occurring radiation sources: existing or planned exposure situation?
Hedemann-Jensen, Per [Danish Decommissioning, DK-4000 Roskilde (Denmark)
2010-12-01
After more than fifteen years of application, ICRP Publication 60 has been revised. The revision was based upon the concept of 'controllable dose' as the dose or sum of doses to an individual from a particular source that can reasonably be controlled by whatever means. The new recommendations have been published as ICRP Publication 103. The European Basic Safety Standards as well as the International Basic Safety Standards are currently under revision as a result of the new recommendations from ICRP. According to the ICRP, there have been indications that some changes to the structure and terminology of the system of protection were desirable in order to improve clarity and utility. In particular, the distinction between practices and interventions may not have been clearly understood, and the ICRP now recognises three types of exposure situations, which replace the previous categorisation into practices and interventions. These exposure situations are intended to cover the entire range of exposure situations: (1) planned exposure, (2) existing exposure and (3) emergency exposure. There are situations of exposure to naturally occurring radiation sources in different occupations, e.g. exposure to radon and radon progeny in workplaces other than where the exposure is required by or is directly related to the work, and aircrew exposed to cosmic radiation. In the European (Euratom) and the International Basic Safety Standards, these exposure situations are treated conceptually differently, either as a planned exposure situation or as an existing exposure situation. This note reviews the change of exposure situations from Publication 60 to Publication 103 and the implications for the revision of both the International and the European Basic Safety Standards. The paper draws some conclusions on the classification of the exposure situations in the two basic safety standards based on a logical interpretation of the ICRP recommendations. It is recommended that the
Naturally Occurring Brands: a New Perspective on Place Marketing
Christine Wright-Isak
2010-01-01
Naturally Occurring Brands: A New Perspective on Place Marketing We suggest community types are "natural brands," because their differentiated imagery has meaning that influences consumer housing choices...
Sparks, R.B.; Stabin, M.G. [Oak Ridge Inst. for Science and Education, TN (United States)
1999-01-01
After administration of I-131 to the female patient, the possibility of radiation exposure of the embryo/fetus exists if the patient becomes pregnant while radioiodine remains in the body. Fetal radiation dose estimates for such cases were calculated. Doses were calculated for various maternal thyroid uptakes and time intervals between administration and conception, including euthyroid and hyperthyroid cases. The maximum fetal dose calculated was about 9.8E-03 mGy/MBq, which occurred with 100% maternal thyroid uptake and a 1-week interval between administration and conception. Placental crossover of the small amount of radioiodine remaining 90 days after conception was also considered. Such crossover could result in an additional fetal dose of 9.8E-05 mGy/MBq and a maximum fetal thyroid self-dose of 3.5E-04 mGy/MBq.
Overview of Maximum Power Point Tracking Techniques for Photovoltaic Energy Production Systems
Koutroulis, Eftichios; Blaabjerg, Frede
2015-01-01
A substantial growth of the installed photovoltaic systems capacity has occurred around the world during the last decade, thus enhancing the availability of electric energy in an environmentally friendly way. The maximum power point tracking technique enables maximization of the energy production of photovoltaic sources during stochastically varying solar irradiation and ambient temperature conditions. Thus, the overall efficiency of the photovoltaic energy production system is increased. Numerous techniques have been presented during the last decade for implementing the maximum power point tracking process in a photovoltaic system. This article provides an overview of the operating principles of these techniques, which are suited for either uniform or non-uniform solar irradiation conditions. The operational characteristics and implementation requirements of these maximum power point tracking...
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two particular cases, in which the subset D of the problem is, respectively, an independent set or a circuit of the matroid. It was proved that, under these circumstances, the collections of k-limited bases satisfy the base axioms. A new matroid was thereby determined, and the k-limited maximum base problem was transformed into the problem of finding a maximum base of this new matroid. For the two special cases, two algorithms, in essence greedy algorithms based on the former matroid, were presented. They were proved to be correct and, in terms of algorithmic complexity, more efficient than the algorithm presented by Ma Zhongfan.
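The greedy approach referred to above can be illustrated in its generic matroid form. The sketch below (invented names, assuming a matroid is given by an independence oracle) builds a maximum-weight base; it is demonstrated on a simple uniform matroid, not on the paper's k-limited construction:

```python
# Generic matroid greedy algorithm: scan elements in decreasing weight order
# and keep each element that preserves independence.  For a matroid, this
# yields a maximum-weight base.

def greedy_max_base(elements, weight, is_independent):
    """Return a maximum-weight base, given an independence oracle."""
    base = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(base + [e]):
            base.append(e)
    return base

# Example: uniform matroid U(3, 5) -- any set of at most 3 elements is independent.
elements = [('a', 5), ('b', 9), ('c', 1), ('d', 7), ('e', 3)]
base = greedy_max_base(elements,
                       weight=lambda x: x[1],
                       is_independent=lambda s: len(s) <= 3)
# base holds the three heaviest elements: b, d, a
```

Swapping in a different independence oracle (e.g. forest-checking for a graphic matroid) reuses the same loop unchanged, which is the sense in which the paper's two special cases reduce to greedy algorithms on a new matroid.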
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
With the idea of maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region deletion test rules and design an interval maximum entropy algorithm for quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
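A common concrete form of the maximum entropy function used in such methods is the log-sum-exp aggregate F_p(g) = (1/p) ln Σ_i exp(p g_i), which smoothly approximates max_i g_i from above with a gap of at most ln(m)/p for m terms. The sketch below is illustrative of that smoothing behaviour only, not of the paper's interval algorithm:

```python
import math

def max_entropy_fn(values, p):
    """Smooth approximation of max(values): F_p = (1/p) * ln(sum exp(p*v)).
    As p grows, F_p decreases toward max(values); the gap is <= ln(m)/p."""
    m = max(values)  # subtract the max before exponentiating, for stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

vals = [1.0, 2.5, 2.0]
approx_lo = max_entropy_fn(vals, p=2.0)    # loose approximation, > 2.5
approx_hi = max_entropy_fn(vals, p=200.0)  # very close to max(vals) = 2.5
```

Because F_p is differentiable, a constrained problem can be smoothed by applying F_p to its constraint functions, which is the basic idea behind entropy-function penalty methods for quadratic programming.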
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit specific adverse effects but also present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of an MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong
2012-01-01
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we survey the development of the maximum-entropy clustering algorithm, point out that the maximum-entropy clustering algorithm is not new in essence, and construct two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may converge not to a local minimum of its objective function but to a saddle point. Based on these results, our paper shows that the convergence theorem of the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general cases.
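For readers unfamiliar with the iteration in question, here is a minimal sketch of one maximum-entropy (deterministic-annealing style) clustering step: memberships follow a Gibbs distribution at a fixed "temperature" 1/beta, and centers are updated as membership-weighted means. The 1-D data, beta value, and function names are invented for illustration and do not reproduce the paper's counterexamples:

```python
import math

def mec_step(data, centers, beta):
    """One maximum-entropy clustering update on 1-D data."""
    new_centers = []
    for j in range(len(centers)):
        num = den = 0.0
        for x in data:
            # Gibbs (maximum-entropy) membership of x in cluster j
            weights = [math.exp(-beta * (x - c) ** 2) for c in centers]
            u = weights[j] / sum(weights)
            num += u * x
            den += u
        new_centers.append(num / den)  # membership-weighted mean
    return new_centers

data = [0.0, 0.2, 0.4, 5.0, 5.2, 5.4]
centers = [1.0, 4.0]
for _ in range(50):
    centers = mec_step(data, centers, beta=5.0)
# centers settle near the two data clumps (about 0.2 and 5.2)
```

On this well-separated toy set the fixed point is a benign one; the paper's point is precisely that such fixed points need not be local minima of the objective in general.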
Maximum likelihood based classification of electron tomographic data.
Stölken, Michael; Beck, Florian; Haller, Thomas; Hegerl, Reiner; Gutsche, Irina; Carazo, Jose-Maria; Baumeister, Wolfgang; Scheres, Sjors H W; Nickell, Stephan
2011-01-01
Classification and averaging of sub-tomograms can improve the fidelity and resolution of structures obtained by electron tomography. Here we present a three-dimensional (3D) maximum likelihood algorithm--MLTOMO--which is characterized by integrating 3D alignment and classification into a single, unified processing step. The novelty of our approach lies in the way we calculate the probability of observing an individual sub-tomogram for a given reference structure. We assume that the reference structure is affected by a 'compound wedge', resulting from the summation of many individual missing wedges in distinct orientations. The distance metric underlying our probability calculations effectively down-weights Fourier components that are observed less frequently. Simulations demonstrate that MLTOMO clearly outperforms the 'constrained correlation' approach and has advantages over existing approaches in cases where the sub-tomograms adopt preferred orientations. Application of our approach to cryo-electron tomographic data of ice-embedded thermosomes revealed distinct conformations that are in good agreement with results obtained by previous single particle studies.
High precision Hugoniot measurements of D2 near maximum compression
Benage, John; Knudson, Marcus; Desjarlais, Michael
2015-11-01
The Hugoniot response of liquid deuterium has been widely studied due to its general importance and to the significant discrepancy in the inferred shock response obtained from early experiments. With improvements in dynamic compression platforms and experimental standards these results have converged and show general agreement with several equation of state (EOS) models, including quantum molecular dynamics (QMD) calculations within the Generalized Gradient Approximation (GGA). This approach to modeling the EOS has also proven quite successful for other materials and is rapidly becoming a standard approach. However, small differences remain among predictions obtained using different local and semi-local density functionals; these small differences show up in the deuterium Hugoniot at ~30-40 GPa, near the region of maximum compression. Here we present experimental results focusing on that region of the Hugoniot, taking advantage of advancements in the platform and standards and resulting in data with significantly higher precision than that obtained in previous studies. These new data may help to distinguish between the subtle differences predicted by the various density functionals. Results of these experiments will be presented along with comparison to various QMD calculations. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Covariance of maximum likelihood evolutionary distances between sequences aligned pairwise.
Dessimoz, Christophe; Gil, Manuel
2008-06-23
The estimation of a distance between two biological sequences is a fundamental process in molecular evolution. It is usually performed by maximum likelihood (ML) on characters aligned either pairwise or jointly in a multiple sequence alignment (MSA). Estimators for the covariance of pairs from an MSA are known, but we are not aware of any solution for cases of pairs aligned independently. In large-scale analyses, it may be too costly to compute MSAs every time distances must be compared, and therefore a covariance estimator for distances estimated from pairs aligned independently is desirable. Knowledge of covariances improves any process that compares or combines distances, such as in generalized least-squares phylogenetic tree building, orthology inference, or lateral gene transfer detection. In this paper, we introduce an estimator for the covariance of distances from sequences aligned pairwise. Its performance is analyzed through extensive Monte Carlo simulations, and compared to the well-known variance estimator of ML distances. Our covariance estimator can be used together with the ML variance estimator to form covariance matrices. The estimator performs similarly to the ML variance estimator. In particular, it shows no sign of bias when sequence divergence is below 150 PAM units (i.e. above ~29% expected sequence identity). Above that distance, the covariances tend to be underestimated, but then ML variances are also underestimated.
Maximum flow-based resilience analysis: From component to system
Jin, Chong; Li, Ruiying; Kang, Rui
2017-01-01
Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable loss, both economic and societal. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel's resilience measure. The two analytic models can be used to quantitatively evaluate and compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of a parallel system increases with an increasing number of components, while the resilience of a series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without. However, not all redundant component capacity improves system resilience; the effectiveness of the capacity redundancy depends on where the redundant capacity is located.
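The series/parallel contrast above rests on the basic max-flow rule that a series system is limited by its bottleneck component while parallel capacities add. The toy model below (assumed component capacities and an illustrative post-disruption ratio, not Zobel's resilience measure itself) sketches that rule:

```python
# Toy max-flow model for series and parallel component arrangements.

def series_max_flow(capacities):
    """Flow through components in series is limited by the weakest one."""
    return min(capacities)

def parallel_max_flow(capacities):
    """Components in parallel contribute their capacities additively."""
    return sum(capacities)

caps = [10.0, 6.0, 8.0]
series = series_max_flow(caps)      # bottleneck: 6.0
parallel = parallel_max_flow(caps)  # aggregate: 24.0

# Illustrative performance ratio after one component (6.0) is disrupted:
# the parallel system retains 18/24 of its flow, the series system may not
# change at all if the disrupted component was not the bottleneck.
parallel_after = parallel_max_flow([10.0, 8.0]) / parallel  # 0.75
```

This also makes the paper's caveat concrete: adding capacity to a non-bottleneck series component leaves `series_max_flow` unchanged, so where redundancy is placed determines whether it helps.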
Constructing Maximum Entropy Language Models for Movie Review Subjectivity Analysis
Bo Chen; Hui He; Jun Guo
2008-01-01
Document subjectivity analysis has become an important aspect of web text content mining. This problem is similar to traditional text categorization, so many related classification techniques can be adapted here. However, there is one significant difference: more language or semantic information is required to better estimate the subjectivity of a document. Therefore, in this paper, our focus is mainly on two aspects. One is how to extract useful and meaningful language features, and the other is how to construct appropriate language models efficiently for this special task. For the first issue, we apply a Global-Filtering and Local-Weighting strategy to select and evaluate language features in a series of n-grams with different orders and within various distance-windows. For the second issue, we adopt Maximum Entropy (MaxEnt) modeling methods to construct our language model framework. Besides the classical MaxEnt models, we have also constructed two kinds of improved models with Gaussian and exponential priors, respectively. Detailed experiments given in this paper show that, with well selected and weighted language features, MaxEnt models with exponential priors are significantly more suitable for the text subjectivity analysis task.
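As a toy illustration of MaxEnt modeling for a binary subjectivity decision, the following self-contained sketch trains a two-class maximum entropy (logistic) model by stochastic gradient ascent. The feature design, data, and learning settings are invented and far simpler than the n-gram features described in the paper:

```python
import math

def train_maxent(X, y, lr=0.5, epochs=200):
    """Fit binary MaxEnt weights by stochastic gradient ascent on log-likelihood."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # model probability p(subjective | features)
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            # gradient step: (observed - expected) * feature value
            w = [wj + lr * (yi - p) * xj for wj, xj in zip(w, xi)]
    return w

def predict(w, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) > 0 else 0

# invented features: [bias, count of "subjective" n-grams, count of "objective" n-grams]
X = [[1, 3, 0], [1, 2, 1], [1, 0, 3], [1, 1, 2]]
y = [1, 1, 0, 0]   # 1 = subjective, 0 = objective
w = train_maxent(X, y)
preds = [predict(w, xi) for xi in X]  # recovers the training labels
```

Adding a term like `- lr * wj / sigma ** 2` inside the weight update would correspond to the Gaussian-prior regularization the paper compares against exponential priors.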
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein
2001-02-01
The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells (8000 metric tons [tonnes] each), sufficient to replicate many heat and compaction characteristics of larger ''full-scale'' landfills. An enhanced demonstration cell has received moisture supplementation to field capacity. This is the maximum moisture waste can hold while still limiting the liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control landfill module receiving no moisture additions. Gas recovery has continued for a period of over 4 years. It is quite encouraging that the enhanced cell methane recovery has been close to 10-fold that experienced with conventional landfills. This is the highest methane recovery rate per unit waste, and thus progress toward stabilization, documented anywhere for such a large waste mass. This high recovery rate is attributed to moisture, and to elevated temperature attained inexpensively during startup. Economic analyses performed under Phase I of this NETL contract indicate ''greenhouse cost effectiveness'' to be excellent. Other benefits include substantial waste volume loss (over 30%), which translates to extended landfill life. Environmental benefits include rapidly improved quality and stabilization (lowered pollutant levels) of the liquid leachate which drains from the waste.
Maximum Power Point Tracking Using Adaptive Fuzzy Logic control for Photovoltaic System
Anass Ait Laachir
2015-01-01
This work presents an intelligent approach to the improvement and optimization of the control performance of a photovoltaic system with maximum power point tracking based on fuzzy logic control. This control was compared with conventional control based on the Perturb & Observe algorithm. The results obtained in Matlab/Simulink under different conditions show a marked improvement in the performance of the fuzzy MPPT control of the PV system.
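The conventional Perturb & Observe baseline referred to above fits in a few lines: perturb the operating voltage, observe the power, and reverse direction when power drops. The P-V curve, step size, and starting point below are invented stand-ins for a real module characteristic:

```python
def pv_power(v):
    """Toy P-V curve with its maximum at v = 17.0 (illustrative only)."""
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

def perturb_and_observe(v0=12.0, step=0.2, iterations=100):
    """Classic P&O loop: keep perturbing; reverse when power decreases."""
    v = v0
    p_prev = pv_power(v)
    direction = +1
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
# v_mpp oscillates within about one step of the true MPP at 17.0 V
```

The steady-state oscillation around the MPP visible here is exactly what fuzzy (or adaptive-step) MPPT controllers aim to reduce.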
Optimizing the top profile of a nanowire for maximum forward emission
Wang Dong-Lin; Yu Zhong-Yuan; Liu Yu-Min; Guo Xiao-Tao; Cao Gui; Feng Hao
2011-01-01
The optimal top structure of a nanowire quantum emitter single photon source is significant in improving performance. Based on the axial symmetry of a cylindrical nanowire, this paper optimizes the top profile of a nanowire for the maximum forward emission by combining the geometry projection method and the finite element method. The results indicate that the nanowire with a cambered top has the stronger emission in the forward direction, which is helpful to improve the photon collection efficiency.
Angular cheilitis occurring during orthodontic treatment: a case series.
Cross, David L; Short, Laura J
2008-12-01
Clinical experience has shown that angular cheilitis can occur during orthodontic treatment and may persist into retention, but the incidence of the condition is unknown. The purpose of this paper is to increase the awareness among clinicians of angular cheilitis occurring during orthodontic treatment. It also proposes a treatment regime which may be used.
CO2 maximum in the oxygen minimum zone (OMZ)
Paulmier, A.; Ruiz-Pino, D.; Garçon, V.
2011-02-01
Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the oceanic sources and sinks budget of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the main most intense OMZs of the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000-2002) and a monthly monitoring (2000-2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg-1, up to 2350 μmol kg-1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates. The "carbon deficit" in the CMZ core would be mainly compensated locally at the oxycline by a "carbon excess" induced by specific remineralization. Indeed, a possible co-existence of bacterial heterotrophic and autotrophic processes usually occurring at different depths could
OCCURRENCE OF HIGH-SPEED SOLAR WIND STREAMS OVER THE GRAND MODERN MAXIMUM
Mursula, K.; Holappa, L. [ReSoLVE Centre of Excellence, Department of Physics, University of Oulu (Finland); Lukianova, R., E-mail: kalevi.mursula@oulu.fi [Geophysical Center of Russian Academy of Science, Moscow (Russian Federation)
2015-03-01
In the declining phase of the solar cycle (SC), when the new-polarity fields of the solar poles are strengthened by the transport of same-signed magnetic flux from lower latitudes, the polar coronal holes expand and form non-axisymmetric extensions toward the solar equator. These extensions enhance the occurrence of high-speed solar wind (SW) streams (HSS) and related co-rotating interaction regions in the low-latitude heliosphere, and cause moderate, recurrent geomagnetic activity (GA) in the near-Earth space. Here, using a novel definition of GA at high (polar cap) latitudes and the longest record of magnetic observations at a polar cap station, we calculate the annually averaged SW speeds as proxies for the effective annual occurrence of HSS over the whole Grand Modern Maximum (GMM) from 1920s onward. We find that a period of high annual speeds (frequent occurrence of HSS) occurs in the declining phase of each of SCs 16-23. For most cycles the HSS activity clearly reaches a maximum in one year, suggesting that typically only one strong activation leading to a coronal hole extension is responsible for the HSS maximum. We find that the most persistent HSS activity occurred in the declining phase of SC 18. This suggests that cycle 19, which marks the sunspot maximum period of the GMM, was preceded by exceptionally strong polar fields during the previous sunspot minimum. This gives interesting support for the validity of solar dynamo theory during this dramatic period of solar magnetism.
The mechanics of granitoid systems and maximum entropy production rates.
Hobbs, Bruce E; Ord, Alison
2010-01-13
A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10(4)-10(7) years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and as ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy- and topography-driven flow over all length scales and result locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus, enabling plutons of finite thickness to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate.
S.B. Ross; R.E. Best; S.J. Maheras; T.I. McSweeney
2001-08-17
Accidents could occur during the transportation of spent nuclear fuel and high-level radioactive waste. This paper describes the risks and consequences to the public from accidents that are highly unlikely but that could have severe consequences. The impacts of these accidents would include those to a collective population and to hypothetical maximally exposed individuals (MEIs). This document discusses accidents with conditions that have a chance of occurring more often than 1 in 10 million times in a year, called "maximum reasonably foreseeable accidents". Accidents and conditions less likely than this are not considered to be reasonably foreseeable.
In-shoe plantar tri-axial stress profiles during maximum-effort cutting maneuvers.
Cong, Yan; Lam, Wing Kai; Cheung, Jason Tak-Man; Zhang, Ming
2014-12-18
Soft tissue injuries, such as anterior cruciate ligament rupture, ankle sprain and foot skin problems, frequently occur during cutting maneuvers. These injuries are often regarded as associated with abnormal joint torque and interfacial friction caused by excessive external and in-shoe shear forces. This study simultaneously investigated the dynamic in-shoe localized plantar pressure and shear stress during lateral shuffling and 45° sidestep cutting maneuvers. Tri-axial force transducers were affixed at the first and second metatarsal heads, lateral forefoot, and heel regions in the midsole of a basketball shoe. Seventeen basketball players executed both cutting maneuvers with maximum efforts. Lateral shuffling cutting had a larger mediolateral braking force than 45° sidestep cutting. This large braking force was concentrated at the first metatarsal head, as indicated by its maximum medial shear stress (312.2 ± 157.0 kPa). During propulsion phase, peak shear stress occurred at the second metatarsal head (271.3 ± 124.3 kPa). Compared with lateral shuffling cutting, 45° sidestep cutting produced larger peak propulsion shear stress (463.0 ± 272.6 kPa) but smaller peak braking shear stress (184.8 ± 181.7 kPa), of which both were found at the first metatarsal head. During both cutting maneuvers, maximum medial and posterior shear stress occurred at the first metatarsal head, whereas maximum pressure occurred at the second metatarsal head. The first and second metatarsal heads sustained relatively high pressure and shear stress and were expected to be susceptible to plantar tissue discomfort or injury. Due to different stress distribution, distinct pressure and shear cushioning mechanisms in basketball footwear might be considered over different foot regions.
Improving Thermoelectric Properties of Nanowires Through Inhomogeneity
González, J. Eduardo; Sánchez, Vicenta; Wang, Chumin
2017-05-01
Inhomogeneity in nanowires can be present in the cross-section and/or introduced by breaking the translational symmetry along the nanowire. In particular, quasiperiodicity introduces an unusual class of electronic and phononic transport with a singular continuous eigenvalue spectrum and critically localized wave functions. In this work, the thermoelectricity in periodic and quasiperiodically segmented nanobelts and nanowires is addressed within the Boltzmann formalism by using a real-space renormalization plus convolution method developed for the Kubo-Greenwood formula, in which tight-binding and Born models are, respectively, used for the calculation of electric and lattice thermal conductivities. For periodic nanowires, we observe a maximum of the thermoelectric figure of merit (ZT) in the temperature space, as occurs in the carrier concentration space. This maximum ZT can be improved by introducing periodically arranged segments and an inhomogeneous cross-section into the nanowires. Finally, the quasiperiodically segmented nanowires reveal an even larger ZT in comparison with the periodic ones.
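To illustrate how a maximum of ZT can arise in temperature space, the following sketch evaluates ZT = S²σT/κ over a temperature grid. The temperature dependences and parameter values are toy assumptions chosen only so that competing trends produce an interior maximum; they are not taken from the paper.

```python
import math

def zt(T, S=2.0e-4, sigma0=1.0e5, kappa_ph=1.5):
    """Toy thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
    The temperature dependences below are illustrative assumptions only,
    chosen so that ZT has an interior maximum in temperature."""
    sigma = sigma0 * math.exp(-T / 300.0)       # assumed decaying conductivity (S/m)
    kappa = kappa_ph + 2.44e-8 * sigma * T      # lattice term + Wiedemann-Franz electronic term
    return S ** 2 * sigma * T / kappa

temps = range(100, 1000)
best_T = max(temps, key=zt)   # interior maximum, here near T = 300 K
```

Because ZT here is a monotone function of T·exp(−T/300), the scan locates the maximum where that product peaks, mirroring how a ZT maximum appears in the temperature space of the abstract.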
Nutrient maximums related to low oxygen concentrations in the southern Canada Basin
JIN Ming-ming; SHI Jiuxin; LU Yong; CHEN Jianfang; GAO Guoping; WU Jingfeng; ZHANG Haisheng
2005-01-01
The phenomenon of nutrient maximums at 70-200 m occurs only in the Canada Basin among the world oceans. The prevailing hypothesis was that the direct injection of low-temperature, high-nutrient brines from the Chukchi Sea shelf (<50 m) in winter produced the nutrient maximums. However, we found five problems with the direct-injection process. Formerly, Jin et al. considered that the formation of nutrient maximums could be a process of locally long-term regeneration. Here we propose a regeneration-mixture process. Data on temperature, salinity, oxygen and nutrients were collected at three stations in the southern Canada Basin during the summer 1999 cruise. We identified the cores of the surface, near-surface and potential-temperature-maximum waters and the Arctic Bottom Water from the diagrams and vertical profiles of salinity, potential temperature, oxygen and nutrients. The historical 129I data indicated that the surface and near-surface waters were Pacific-origin, but the waters below the potential temperature maximum core depth were Atlantic-origin. Together with the correlation of nutrient maximums and very low oxygen contents in the near-surface water, we hypothesize that putative organic matter was decomposed to inorganic nutrients and that the Pacific water was mixed with the Atlantic water in the transition zone. The idea of the regeneration-mixture process agrees with the historical observations of no apparent seasonal changes, the smooth nutrient profiles, the lowest saturation of CaCO3 above 400 m, the low rate of CFC-11 ventilation and 3H-3He ages of 8-18 a around the nutrient maximum depths.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
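A classical toy version of the footprint-ratio optimization can make the idea concrete. The model below is an assumed illustration, not the paper's: for one n-p couple with fixed total footprint and leg length, the matched-load power is maximized when the internal resistance is minimized, which occurs at A_n/A_p = sqrt(ρ_n/ρ_p).

```python
import math

def teg_power(ratio, rho_n=1.0e-5, rho_p=1.5e-5, L=1.0e-3, A_total=1.0e-4,
              alpha=4.0e-4, dT=100.0):
    """Matched-load power of one n-p couple versus footprint ratio A_n/A_p.

    Toy model (assumed parameters): fixed total footprint A_total, leg
    length L, combined Seebeck coefficient alpha, temperature drop dT.
    """
    A_n = A_total * ratio / (1.0 + ratio)
    A_p = A_total / (1.0 + ratio)
    R_int = rho_n * L / A_n + rho_p * L / A_p   # series electrical resistance
    return (alpha * dT) ** 2 / (4.0 * R_int)    # matched load: P = V^2 / (4 R)

ratios = [0.2 + 0.001 * i for i in range(2000)]
best = max(ratios, key=teg_power)
# Analytic optimum for this model: A_n/A_p = sqrt(rho_n/rho_p) ~= 0.816
```

The grid search recovers the analytic optimum because minimizing ρ_n/A_n + ρ_p/A_p under A_n + A_p = const is a single-variable convex problem.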
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 56.19066 Section 56.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19066 Maximum riders in a conveyance. In shafts inclined over 45...
30 CFR 57.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 57.19066 Section 57.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19066 Maximum riders in a conveyance. In shafts inclined over 45...
Maximum Atmospheric Entry Angle for Specified Retrofire Impulse
T. N. Srivastava
1969-07-01
Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated, and it is shown that a tangential retrofire impulse at the apogee results in the maximum entry angle. The equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the...
46 CFR 151.45-6 - Maximum amount of cargo.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Maximum amount of cargo. 151.45-6 Section 151.45-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES BARGES CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a)...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a...
Maximum-entropy clustering algorithm and its global convergence analysis
Anonymous
2001-01-01
By constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
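The soft generalization of hard C-means described above can be sketched in one dimension. This is a generic maximum-entropy (Gibbs-weighted) clustering iteration for illustration, not the authors' implementation; the inverse temperature beta is an assumed parameter, and as beta grows the soft memberships harden toward ordinary C-means assignments.

```python
import math, random

def maxent_cluster(points, k, beta=5.0, iters=50, seed=0):
    """Maximum-entropy (soft) clustering sketch: memberships are Gibbs
    weights exp(-beta * d^2), a differentiable relaxation of hard C-means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # E-step: soft memberships from the maximum-entropy distribution
        weights = []
        for x in points:
            w = [math.exp(-beta * (x - c) ** 2) for c in centers]
            s = sum(w)
            weights.append([wi / s for wi in w])
        # M-step: each center moves to its membership-weighted mean
        for j in range(k):
            num = sum(weights[i][j] * points[i] for i in range(len(points)))
            den = sum(weights[i][j] for i in range(len(points)))
            centers[j] = num / den
    return sorted(centers)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(maxent_cluster(data, 2))  # centers near 0.1 and 5.1
```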
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
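The maximum loss studied here is the largest drawdown of the path, max over s <= u of X_s - X_u. A minimal simulation sketch follows, using the boundary case H = 1/2 (standard Brownian motion with drift), since exact fractional Brownian motion sampling requires heavier machinery; the step count, drift, and seed are illustrative assumptions.

```python
import random

def max_loss(path):
    """Maximum loss (largest drawdown) of a discrete path:
    max over s <= u of X_s - X_u, computed in one pass."""
    peak, loss = path[0], 0.0
    for x in path:
        peak = max(peak, x)
        loss = max(loss, peak - x)
    return loss

def brownian_path(n=10_000, t=1.0, mu=0.5, seed=1):
    """Brownian motion with drift mu: the H = 1/2 boundary case of the
    fractional Brownian motion considered in the paper."""
    dt = t / n
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += mu * dt + rng.gauss(0.0, dt ** 0.5)
        path.append(x)
    return path

path = brownian_path()
print(max_loss(path))
```

Monte Carlo repetitions of this sketch give an empirical tail of the maximum-loss distribution that the paper's bounds can be compared against.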
48 CFR 436.575 - Maximum workweek-construction schedule.
2010-10-01
...-construction schedule. 436.575 Section 436.575 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE... Maximum workweek-construction schedule. The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum permissible concentration. 57.5039... Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings. ...
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
Manufactured Home Testing in Simulated and Naturally Occurring High Winds
W. D. Richins; T. K. Larson
2006-08-01
A typical double-wide manufactured home was tested in simulated and naturally occurring high winds to understand structural behavior and improve performance during severe windstorms. Seven (7) lateral load tests were conducted on a double-wide manufactured home at a remote field test site in Wyoming. An extensive instrumentation package monitored the overall behavior of the home and collected data vital to validating computational software for the manufactured housing industry. The tests were designed to approach the design load of the home without causing structural damage, thus allowing the behavior of the home to be assessed when the home was later exposed to high winds (up to 80 mph). The data generally show near-linear initial system response with significant non-linear behavior as the applied loads increase. Load transfer across the marriage line is primarily compression. Racking, while present, is very small. Interface slip and shear displacement along the marriage line are nearly insignificant. Horizontal global displacements reached 0.6 inch. These tests were designed primarily to collect data necessary to calibrate a desktop analysis and design software tool, MHTool, under development at the Idaho National Laboratory specifically for manufactured housing. Currently available analysis tools are, for the most part, based on methods developed for “stick built” structures and are inappropriate for manufactured homes. The special materials utilized in manufactured homes, such as rigid adhesives used in the connection of the sheathing materials to the studs, significantly alter the behavior of manufactured homes under lateral loads. Previous full scale tests of laterally loaded manufactured homes confirm the contention that conventional analysis methods are not applicable. System behavior dominates the structural action of manufactured homes and its prediction requires a three dimensional analysis of the complete unit, including tiedowns. This project was
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u'_x (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
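The time-step dependence of the discrete maximum principle can be checked numerically. The sketch below (an illustrative explicit-Euler discretization with assumed parameters, not the authors' analysis) evolves the lattice Nagumo equation from data in [0, 1] and confirms the solution stays in [0, 1] when dt is small enough.

```python
def nagumo_step(u, k=1.0, a=0.3, dt=0.1):
    """One explicit time step of the lattice Nagumo equation
    u'_x = k*(u[x-1] - 2u[x] + u[x+1]) + f(u[x]) with bistable
    f(u) = u*(1 - u)*(u - a) and zero-flux (Neumann) boundaries."""
    f = lambda v: v * (1.0 - v) * (v - a)
    n = len(u)
    new = u[:]
    for x in range(n):
        left = u[x - 1] if x > 0 else u[x]
        right = u[x + 1] if x < n - 1 else u[x]
        new[x] = u[x] + dt * (k * (left - 2.0 * u[x] + right) + f(u[x]))
    return new

# Initial data inside [0, 1]: a step profile
u = [1.0 if x < 10 else 0.0 for x in range(40)]
for _ in range(500):
    u = nagumo_step(u)
# For this small dt the discrete weak maximum principle holds:
print(min(u), max(u))
```

Rerunning with a large dt (e.g. dt = 1.0) violates the [0, 1] bounds, matching the abstract's point that the discrete principle depends on the time step.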
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to, and slightly above, the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating the PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave applied at the boundary of a tunnel or borehole.
Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.
Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng
2017-04-17
added with Gaussian distributed noise. Meanwhile, clinical breast ultrasound images are used to visually evaluate the effectiveness of the method. To examine the performance, comparison tests between the proposed RSBF and six state-of-the-art methods for ultrasound speckle removal are performed on simulated ultrasound images with various noise and speckle levels. The results of the proposed RSBF are satisfying since the Gaussian noise and the Rayleigh speckle are greatly suppressed. The proposed method can improve the SNRs of the enhanced images to nearly 15 and 13 dB compared with images corrupted by speckle as well as images contaminated by speckle and noise under various SNR levels, respectively. The RSBF is effective in enhancing edges while smoothing the speckle and noise in clinical ultrasound images. In the comparison experiments, the proposed method demonstrates its superiority in accuracy and robustness for denoising and edge preserving under various levels of noise and speckle in terms of visual quality as well as numeric metrics, such as peak signal-to-noise ratio, SNR and root mean squared error. The experimental results show that the proposed method is effective for removing the speckle and the background noise in ultrasound images. The main reason is that it performs a "detect and replace" two-step mechanism. The advantages of the proposed RSBF lie in two aspects. Firstly, each central pixel is classified as noise, speckle or noise-free texture according to the absolute difference between the target pixel and the reference median. Subsequently, the Rayleigh-maximum-likelihood filter and the bilateral filter are switched to eliminate speckle and noise, respectively, while the noise-free pixels are unaltered. Therefore, it is implemented with better accuracy and robustness than the traditional methods. Generally, these results indicate that the proposed RSBF would have significant clinical application.
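The "detect and replace" mechanism described above can be sketched in a heavily simplified 1-D form. Everything below is an assumed toy stand-in, not the authors' 2-D formulation: classification uses the absolute difference against a local median, flagged samples are replaced using the Rayleigh maximum-likelihood scale estimate of the window, and unflagged samples pass through a plain bilateral filter; the radius, threshold, and sigma values are illustrative.

```python
import math

def rsbf_1d(signal, radius=2, thresh=0.5, sigma_s=1.0, sigma_r=0.3):
    """Toy 1-D sketch of the RSBF 'detect and replace' idea (assumed
    simplification): classify each sample against the local median, replace
    outliers via a Rayleigh maximum-likelihood estimate, and smooth the
    rest with a bilateral filter."""
    out = []
    n = len(signal)
    for i in range(n):
        start = max(0, i - radius)
        window = signal[start: i + radius + 1]
        med = sorted(window)[len(window) // 2]
        if abs(signal[i] - med) > thresh:
            # Rayleigh ML scale estimate: sqrt(sum(x^2) / (2 m))
            out.append(math.sqrt(sum(w * w for w in window) / (2 * len(window))))
        else:
            num = den = 0.0
            for j, w in enumerate(window):
                d = (i - (start + j)) ** 2
                weight = math.exp(-d / (2 * sigma_s ** 2)
                                  - (w - signal[i]) ** 2 / (2 * sigma_r ** 2))
                num += weight * w
                den += weight
            out.append(num / den)
    return out

noisy = [1.0, 1.1, 0.9, 5.0, 1.0, 1.05, 0.95]  # one impulse at index 3
print(rsbf_1d(noisy))
```

The range kernel of the bilateral branch is what preserves edges: samples whose values differ strongly from the center receive negligible weight.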
Does crater 'saturation equilibrium' occur in the solar system?
Hartmann, W. K.
1984-01-01
The similarity in crater densities on the most heavily cratered surfaces throughout the solar system is statistically examined and discussed in terms of a 'saturation equilibrium' being achieved by cratering processes. This hypothesis accounts for (1) the similarity in maximum relative crater density, below certain theoretically predicted values, on all heavily cratered surfaces; (2) a levelling off at this same relative density among 100-m scale craters in populations on lunar maria and other sparsely cratered lunar surfaces; and (3) the approximate uniformity of maximum relative densities on Saturn satellites. The lunar frontside upland crater population, sometimes described as a well-preserved production function useful for interpreting other planetary surfaces, is found not to be a production function. It was modified by intercrater plains at least partly formed by early upland basaltic lava flooding.
Giese, Heiner; Azizan, Amizon; Kümmel, Anne; Liao, Anping; Peter, Cyril P; Fonseca, João A; Hermann, Robert; Duarte, Tiago M; Büchs, Jochen
2014-02-01
In biotechnological screening and production, oxygen supply is a crucial parameter. Even though oxygen transfer is well documented for viscous cultivations in stirred tanks, little is known about the gas/liquid oxygen transfer in shake flask cultures that become increasingly viscous during cultivation. Especially the oxygen transfer into the liquid film adhering to the shake flask wall has not yet been described for such cultivations. In this study, the oxygen transfer of chemical and microbial model experiments was measured and the suitability of the widely applied film theory of Higbie was studied. With numerical simulations of Fick's law of diffusion, it was demonstrated that Higbie's film theory does not apply to cultivations at viscosities up to 10 mPa s. For the first time, it was experimentally shown that the maximum oxygen transfer capacity OTRmax increases in shake flasks when viscosity is increased from 1 to 10 mPa s, leading to an improved oxygen supply for microorganisms. Additionally, the OTRmax does not fall significantly below its value at waterlike viscosities, even at elevated viscosities of up to 80 mPa s. In this range, a shake flask is a somewhat self-regulating system with respect to oxygen supply. This is in contrast to stirred tanks, where the oxygen supply is steadily reduced to only 5% at 80 mPa s. Since the liquid film formed on the shake flask walls inherently promotes the oxygen supply at moderate and elevated viscosities, these results have significant implications for scale-up.
Transient dwarfism of soil fauna during the Paleocene-Eocene Thermal Maximum
Smith, J.J.; Hasiotis, S.T.; Kraus, M.J.; Woody, D.T.
2009-01-01
Soil organisms, as recorded by trace fossils in paleosols of the Willwood Formation, Wyoming, show significant body-size reductions and increased abundances during the Paleocene-Eocene Thermal Maximum (PETM). Paleobotanical, paleopedologic, and oxygen isotope studies indicate high temperatures during the PETM and sharp declines in precipitation compared with late Paleocene estimates. Insect and oligochaete burrows increase in abundance during the PETM, suggesting longer periods of soil development and improved drainage conditions. Crayfish burrows and molluscan body fossils, abundant below and above the PETM interval, are significantly less abundant during the PETM, likely because of drier floodplain conditions and lower water tables. Burrow diameters of the most abundant ichnofossils are 30-46% smaller within the PETM interval. As burrow size is a proxy for body size, significant reductions in burrow diameter suggest that their tracemakers were smaller bodied. Smaller body sizes may have resulted from higher subsurface temperatures, lower soil moisture conditions, or nutritionally deficient vegetation in the high-CO2 atmosphere inferred for the PETM. Smaller soil fauna co-occur with dwarf mammal taxa during the PETM; thus, a common forcing mechanism may have selected for small size in both above- and below-ground terrestrial communities. We predict that soil fauna have already shown reductions in size over the last 150 years of increased atmospheric CO2 and surface temperatures or that they will exhibit this pattern over the next century. We retrodict also that soil fauna across the Permian-Triassic and Triassic-Jurassic boundary events show significant size decreases because of similar forcing mechanisms driven by rapid global warming.
Takahashi, Jun; Takabe, Satoshi; Hukushima, Koji
2017-07-01
A recently proposed exact algorithm for the maximum independent set problem is analyzed. The typical running time is improved exponentially in some parameter regions compared to simple binary search. Furthermore, the algorithm overcomes the core transition point, where the conventional leaf removal algorithm fails, and works up to the replica symmetry breaking (RSB) transition point. This suggests that a leaf removal core itself is not enough for typical hardness in the random maximum independent set problem, providing further evidence for RSB being the obstacle for algorithms in general.
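The leaf removal algorithm referred to above can be sketched generically (this is a standard implementation for illustration, not the authors' analyzed variant): repeatedly take any vertex of degree at most one into the independent set and delete it together with its neighbour. On forests this is exact; on random graphs it stalls on the leftover "core" discussed in the abstract.

```python
def leaf_removal(adj):
    """Leaf removal for the maximum independent set problem.
    adj: dict vertex -> set of neighbours.
    Returns (independent_set, core), where core holds the vertices left
    over once no vertex of degree <= 1 remains."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    independent = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and len(adj[v]) <= 1:
                independent.add(v)                # a leaf is always safe to take
                for u in list(adj[v]):            # delete its neighbour too
                    for w in adj[u]:
                        adj[w].discard(u)
                    del adj[u]
                del adj[v]
                changed = True
    return independent, set(adj)

# Path graph 0-1-2-3-4: leaf removal finds the optimum {0, 2, 4}
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
indep, core = leaf_removal(path)
```

A triangle, by contrast, has no leaves at all, so the algorithm returns it untouched as a core, which is the failure mode the core transition point formalizes.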
Maximum range of a projectile thrown from constant-speed circular motion
Poljak, Nikola
2016-01-01
The problem of determining the angle at which a point mass launched from ground level with a given speed attains maximum range is a standard exercise in mechanics. Similar, yet conceptually and computationally more difficult, problems have been suggested to improve student proficiency in projectile motion. The problem of determining the maximum distance of a rock thrown from a rotating arm is presented and analyzed in detail in this text. The computational results confirm several conceptually derived conclusions regarding the initial throw position and provide some details on the angles and the manner of throwing (underhand or overhand) that produce the maximum throw distance.
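The optimization can be explored numerically. The geometry below is an assumption made for illustration (a vertical circle of radius R with its lowest point on the ground, tangential release at speed v), not necessarily the paper's exact setup; it reduces to the classic 45° ground-launch optimum as R → 0.

```python
import math

def throw_range(phi_deg, v=10.0, R=1.0, g=9.81):
    """Range of a point mass released tangentially from a vertical circle of
    radius R whose centre sits at height R. phi is the release angle measured
    from the bottom of the circle (assumed toy geometry)."""
    phi = math.radians(phi_deg)
    x0 = R * math.sin(phi)              # release position
    y0 = R * (1.0 - math.cos(phi))
    vx = v * math.cos(phi)              # tangential velocity components
    vy = v * math.sin(phi)
    # flight time from height y0 with vertical speed vy until landing (y = 0)
    t = (vy + math.sqrt(vy ** 2 + 2.0 * g * y0)) / g
    return x0 + vx * t

angles = [0.1 * i for i in range(0, 901)]
best = max(angles, key=throw_range)     # optimal release angle for R = 1
```

Scanning the release angle in 0.1° steps is enough here because the range is a smooth, single-peaked function of the angle in this geometry.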
Comparative Toxicology of Libby Amphibole and Naturally Occurring Asbestos
Summary sentence: Comparative toxicology of Libby amphibole (LA) and site-specific naturally occurring asbestos (NOA) provides new insights on physical properties influencing health effects and mechanisms of asbestos-induced inflammation, fibrosis, and tumorigenesis.Introduction/...
FastTree 2--approximately maximum-likelihood trees for large alignments.
Morgan N Price
BACKGROUND: We recently described FastTree, a tool for inferring phylogenies for alignments with up to hundreds of thousands of sequences. Here, we describe improvements to FastTree that improve its accuracy without sacrificing scalability. METHODOLOGY/PRINCIPAL FINDINGS: Where FastTree 1 used nearest-neighbor interchanges (NNIs) and the minimum-evolution criterion to improve the tree, FastTree 2 adds minimum-evolution subtree-pruning-regrafting (SPRs) and maximum-likelihood NNIs. FastTree 2 uses heuristics to restrict the search for better trees and estimates a rate of evolution for each site (the "CAT" approximation). Nevertheless, for both simulated and genuine alignments, FastTree 2 is slightly more accurate than a standard implementation of maximum-likelihood NNIs (PhyML 3 with default settings). Although FastTree 2 is not quite as accurate as methods that use maximum-likelihood SPRs, most of the splits that disagree are poorly supported, and for large alignments, FastTree 2 is 100-1,000 times faster. FastTree 2 inferred a topology and likelihood-based local support values for 237,882 distinct 16S ribosomal RNAs on a desktop computer in 22 hours and 5.8 gigabytes of memory. CONCLUSIONS/SIGNIFICANCE: FastTree 2 allows the inference of maximum-likelihood phylogenies for huge alignments. FastTree 2 is freely available at http://www.microbesonline.org/fasttree.
Enhancement of the maximum proton energy by funnel-geometry target in laser-plasma interactions
Yang, Peng; Fan, Dapeng; Li, Yuxiao
2016-09-01
Enhancement of the maximum proton energy using a funnel-geometry target is demonstrated through particle simulations of laser-plasma interactions. When an intense short-pulse laser illuminates a thin foil target, the foil electrons are pushed by the laser ponderomotive force and then form an electron cloud at the target rear surface. The electron cloud generates a strong electrostatic field, which accelerates the protons to high energies. If there is a hole in the rear of the target, the shape of the electron cloud and the distribution of the protons will be affected by the protuberant part of the hole. In this paper, a funnel-geometry target is proposed to improve the maximum proton energy. Using two-dimensional particle-in-cell simulations, the transverse electric fields generated by the side walls of four different holes are calculated, and protons inside the holes are restricted to specific shapes by these fields. In the funnel-geometry target, more protons are restricted near the center of the longitudinal accelerating electric field; thus, protons experience a longer accelerating time and distance in the sheath field than in a traditional cylindrical hole target. Accordingly, more protons with higher energies are produced from the funnel-geometry target. The maximum proton energy is improved by about 4 MeV compared with a traditional cylinder-shaped hole target. The funnel-geometry target serves as a new method to improve the maximum proton energy in laser-plasma interactions.
2010-07-01
... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...
Thermospheric density model biases at the 23rd sunspot maximum
Pardini, C.; Moe, K.; Anselmo, L.
2012-07-01
Uncertainties in the neutral density estimation are the major source of aerodynamic drag errors and one of the main limiting factors in the accuracy of the orbit prediction and determination process at low altitudes. Massive efforts have been made over the years to constantly improve the existing operational density models, or to create even more precise and sophisticated tools. Special attention has also been paid to researching more appropriate solar and geomagnetic indices. However, the operational models still suffer from weaknesses. Even if a number of studies have been carried out in the last few years to define the performance improvements, further critical assessments are necessary to evaluate and compare the models at different altitudes and solar activity conditions. Taking advantage of the results of a previous study, an investigation of thermospheric density model biases during the last sunspot maximum (October 1999 - December 2002) was carried out by analyzing the semi-major axis decay of four satellites: Cosmos 2265, Cosmos 2332, SNOE and Clementine. Six thermospheric density models, widely used in spacecraft operations, were analyzed: JR-71, MSISE-90, NRLMSISE-00, GOST-2004, JB2006 and JB2008. During the time span considered, for each satellite and atmospheric density model, a fitted drag coefficient was solved for and then compared with the calculated physical drag coefficient. It was therefore possible to derive the average density biases of the thermospheric models during the maximum of the 23rd solar cycle. Below 500 km, all the models overestimated the average atmospheric density by amounts varying between +7% and +20%. This was an inevitable consequence of constructing thermospheric models from density data obtained by assuming a fixed drag coefficient, independent of altitude. Because the uncertainty affecting the drag coefficient measurements was about 3% at both 200 km and 480 km of altitude, the calculated air density biases below 500 km were
J Nemati; GH Majzoobi; S Sulaiman; BTHT Baharudin; M.A. Azmah Hanim
2014-01-01
In this study, annealed pure copper was extruded using equal channel angular extrusion (ECAE) for a maximum of eight passes. The fatigue resistance of extruded specimens was evaluated for different passes and applied stresses using fatigue tests, fractography, and metallography. The mechanical properties of the extruded material were obtained at a tensile test velocity of 0.5 mm/min. It was found that the maximum increase in strength occurred after the 2nd pass. The total increase in ultimate strength after eight passes was 94%. The results of fatigue tests indicated that a significant improvement in fatigue life occurred after the 2nd pass. In subsequent passes, the fatigue life continued to improve but at a considerably lower rate. The improved fatigue life was dependent on the number of passes and applied stresses. For low stresses (or high-cycle fatigue), a maximum increase in fatigue resistance of approximately 500% was observed for the extruded material after eight passes, whereas a maximum fatigue resistance of 5000% was obtained for high applied stresses (or low-cycle fatigue). Optical microscopic examinations revealed grain refinements in the range of 32 to 4 µm. A maximum increase in impact energy absorption of 100% was achieved after eight passes. Consistent results were obtained from fractography and metallography examinations of the extruded material during fatigue tests.
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Daigle Bernie J
2012-05-01
Full Text Available Abstract Background: A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results: We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
Fixed-parameter tractability of the maximum agreement supertree problem.
Guillemot, Sylvain; Berry, Vincent
2010-01-01
Given a set L of labels and a collection of rooted trees whose leaves are bijectively labeled by some elements of L, the Maximum Agreement Supertree (SMAST) problem is given as follows: find a tree T on a largest label set L' ⊆ L that homeomorphically contains every input tree restricted to L'. The problem has phylogenetic applications to infer supertrees and perform tree congruence analyses. In this paper, we focus on the parameterized complexity of this NP-hard problem, considering different combinations of parameters as well as particular cases. We show that SMAST on k rooted binary trees on a label set of size n can be solved in O((8n)^k) time, which is an improvement with respect to the previously known O(n^(3k^2)) time algorithm. In this case, we also give an O((2k)^p k n^2) time algorithm, where p is an upper bound on the number of leaves of L missing in a SMAST solution. This shows that SMAST can be solved efficiently when the input trees are mostly congruent. Then, for the particular case where any triple of leaves is contained in at least one input tree, we give O(4^p n^3) and O(3.12^p + n^4) time algorithms, obtaining the first fixed-parameter tractable algorithms on a single parameter for this problem. We also obtain intractability results for several combinations of parameters, thus indicating that it is unlikely that fixed-parameter tractable algorithms can be found in these particular cases.
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
Andrikopoulos, Nikolaos K; Kaliora, Andriana C; Assimopoulou, Andreana N; Papapeorgiou, Vassilios P
2003-05-01
Naturally occurring gums and resins with beneficial pharmaceutical and nutraceutical properties were tested for their possible protective effect against copper-induced LDL oxidation in vitro. Chios mastic gum (CMG) (Pistacia lentiscus var. Chia resin) was the most effective in protecting human LDL from oxidation. The minimum and maximum doses for the saturation phenomena of inhibition of LDL oxidation were 2.5 mg and 50 mg CMG (75.3% and 99.9%, respectively). The methanol/water extract of CMG was the most effective compared with other solvent combinations. Fractionation of CMG to determine a structure-activity relationship showed that the total mastic essential oil, collofonium-like residue and acidic fractions of CMG exhibited a high protective activity ranging from 65.0% to 77.8%. The other natural gums and resins (CMG resin 'liquid collection', P. terebinthus var. Chia resin, dammar resin, acacia gum, tragacanth gum, storax gum), also tested as above, showed 27.0%-78.8% of the maximum LDL protection. The other naturally occurring substances, i.e. triterpenes (amyrin, oleanolic acid, ursolic acid, lupeol, 18-a-glycyrrhetinic acid) and hydroxynaphthoquinones (naphthazarin, shikonin and alkannin), showed 53.5%-78.8% and 27.0%-64.1% LDL protective activity, respectively. The combination effects (68.7%-76.2% LDL protection) of ursolic, oleanolic and ursodeoxycholic acids were almost equal to the effect (75.3%) of the CMG extract at comparable doses.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
Anonymous
2002-01-01
By taking the subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
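The cut-off quoted above can be converted into a field estimate directly: for electron cyclotron emission, f_ce = eB/(2π m_e), so a 38-40 MHz cut-off fixes the peak magnetic field strength in the source region. A minimal sketch (constants are CODATA values; the interpretation as an electron gyrofrequency follows the abstract):

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def gyrofrequency_to_field(f_hz: float) -> float:
    """Magnetic field strength (tesla) implied by an electron
    cyclotron frequency f_ce = e*B / (2*pi*m_e)."""
    return 2 * math.pi * M_ELECTRON * f_hz / E_CHARGE

# A 38-40 MHz cut-off implies a polar field of roughly 13.6-14.3 gauss.
for f_mhz in (38.0, 40.0):
    b_gauss = gyrofrequency_to_field(f_mhz * 1e6) * 1e4  # 1 T = 10^4 G
    print(f"{f_mhz:.0f} MHz -> {b_gauss:.1f} G")
```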
Potential role of motion for enhancing maximum output energy of triboelectric nanogenerator
Byun, Kyung-Eun; Lee, Min-Hyun; Cho, Yeonchoo; Nam, Seung-Geol; Shin, Hyeon-Jin; Park, Seongjun
2017-07-01
Although the triboelectric nanogenerator (TENG) has been explored as one of the possible candidates for the auxiliary power source of portable and wearable devices, the output energy of a TENG is still insufficient to charge the devices with daily motion. Moreover, the fundamental aspects of the maximum possible energy of a TENG related to human motion are not understood systematically. Here, we confirmed the possibility of charging commercialized portable and wearable devices such as smart phones and smart watches by utilizing the mechanical energy generated by human motion. We confirmed by theoretical extraction that the maximum possible energy is related to specific form factors of a TENG. Furthermore, we experimentally demonstrated the effect of human motion in terms of kinetic energy and impulse by varying velocity and elasticity, and clarified how to improve the maximum possible energy of a TENG. This study gives insight into the design of a TENG to obtain a large amount of energy in a limited space.
B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy
Shilpa Dilipkumar
2015-03-01
Full Text Available An iterative image reconstruction technique employing a B-Spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) with quadratic potential function shows its superiority over the others. The B-Spline MAP technique can find applications in several imaging modalities of fluorescence microscopy like selective plane illumination microscopy, localization microscopy and STED.
Sullivan, Christopher J.; Sacks, Stanley; McKendrick, Karen; Banks, Steven; Sacks, Joann Y.; Stommel, Joseph
2007-01-01
This paper examines outcomes 12 months post-prison release for offenders with co-occurring disorders (n = 185) randomly assigned to either a mental health control treatment (C) or a modified therapeutic community (E). Significant between-group differences were not found for mental health measures, although improvements were observed for each…
Estimating Metabolic Fluxes Using a Maximum Network Flexibility Paradigm
Megchelenbrink, Wout; Rossell, Sergio; Huynen, Martijn A.
2015-01-01
Motivation: Genome-scale metabolic networks can be modeled in a constraint-based fashion. Reaction stoichiometry combined with flux capacity constraints determine the space of allowable reaction rates. This space is often large and a central challenge in metabolic modeling is finding the biologically most relevant flux distributions. A widely used method is flux balance analysis (FBA), which optimizes a biologically relevant objective such as growth or ATP production. Although FBA has proven to be highly useful for predicting growth and byproduct secretion, it cannot predict the intracellular fluxes under all environmental conditions. Therefore, alternative strategies have been developed to select flux distributions that are in agreement with experimental “omics” data, or by incorporating experimental flux measurements. The latter, unfortunately, can only be applied to a limited set of reactions and is currently not feasible at the genome scale. On the other hand, it has been observed that micro-organisms favor a suboptimal growth rate, possibly in exchange for a more “flexible” metabolic network. Instead of dedicating the internal network state to an optimal growth rate in one condition, a suboptimal growth rate is used that allows for an easier switch to other nutrient sources. A small decrease in growth rate is exchanged for a relatively large gain in metabolic capability to adapt to changing environmental conditions. Results: Here, we propose Maximum Metabolic Flexibility (MMF), a computational method that utilizes this observation to find the most probable intracellular flux distributions. By mapping measured flux data from central metabolism to the genome-scale models of Escherichia coli and Saccharomyces cerevisiae we show that (i) indeed, most of the measured fluxes agree with a high adaptability of the network, (ii) this result can be used to further reduce the space of feasible solutions, and (iii) this reduced space improves the quantitative predictions
THE MAXIMUM ENERGY OF ACCELERATED PARTICLES IN RELATIVISTIC COLLISIONLESS SHOCKS
Sironi, Lorenzo [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Spitkovsky, Anatoly [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544-1001 (United States); Arons, Jonathan, E-mail: lsironi@cfa.harvard.edu [Department of Astronomy, Department of Physics, and Theoretical Astrophysics Center, University of California, Berkeley, CA 94720 (United States)
2013-07-01
The afterglow emission from gamma-ray bursts (GRBs) is usually interpreted as synchrotron radiation from electrons accelerated at the GRB external shock that propagates with relativistic velocities into the magnetized interstellar medium. By means of multi-dimensional particle-in-cell simulations, we investigate the acceleration performance of weakly magnetized relativistic shocks, in the magnetization range 0 ≲ σ ≲ 10^-1. The pre-shock magnetic field is orthogonal to the flow, as generically expected for relativistic shocks. We find that relativistic perpendicular shocks propagating in electron-positron plasmas are efficient particle accelerators if the magnetization is σ ≲ 10^-3. For electron-ion plasmas, the transition to efficient acceleration occurs for σ ≲ 3 × 10^-5. Here, the acceleration process proceeds similarly for the two species, since the electrons enter the shock nearly in equipartition with the ions, as a result of strong pre-heating in the self-generated upstream turbulence. In both electron-positron and electron-ion shocks, we find that the maximum energy of the accelerated particles scales in time as ε_max ∝ t^(1/2). This scaling is shallower than the so-called (and commonly assumed) Bohm limit ε_max ∝ t, and it naturally results from the small-scale nature of the Weibel turbulence generated in the shock layer. In magnetized plasmas, the energy of the accelerated particles increases until it reaches a saturation value ε_sat/(γ_0 m_i c^2) ≈ σ^(-1/4), where γ_0 m_i c^2 is the mean energy per particle in the upstream bulk flow. Further energization is prevented by the fact that the self-generated turbulence is confined within a finite region of thickness ∝ σ^(-1/2) around the shock. Our results can provide physically
Skin picking disorder with co-occurring body dysmorphic disorder.
Grant, Jon E; Redden, Sarah A; Leppink, Eric W; Odlaug, Brian L
2015-09-01
There is clinical overlap between skin picking disorder (SPD) and body dysmorphic disorder (BDD), but little research has examined clinical and cognitive correlates of the two disorders when they co-occur. Of 55 participants with SPD recruited for a neurocognitive study and two pharmacological studies, 16 (29.1%) had co-occurring BDD. SPD participants with and without BDD were compared to each other and to 40 healthy volunteers on measures of symptom severity, social functioning, and cognitive assessments using the Stop-signal task (assessing response impulsivity) and the Intra-dimensional/Extra-dimensional Set Shift task (assessing cognitive flexibility). Individuals with SPD and BDD exhibited significantly worse picking, significantly worse overall psychosocial functioning, and significantly greater dysfunction on aspects of cognitive flexibility. These results indicate that when SPD co-occurs with BDD unique clinical and cognitive aspects of SPD may be more pronounced. Future work should explore possible subgroups in SPD and whether these predict different treatment outcomes.
Endometrial carcinoma occurring from polycystic ovary disease: A case report
Seong, Su Ok; Jeon, Woo Ki [Inje Univ. College of Medicine, Seoul (Korea, Republic of)
1996-12-01
Endometrial carcinoma usually occurs in postmenopausal women; less than 5% occurs in women under the age of 40. Up to one quarter of endometrial carcinoma patients below this age have PCO (polycystic ovary disease, Stein-Leventhal syndrome). The increased incidence of endometrial carcinoma in patients with PCO is related to chronic estrogenic stimulation. We report MR imaging in one case of endometrial carcinoma occurring in a 23-year-old woman with PCO who had complained of hypermenorrhea for about three years. On the T2-weighted MR image the endometrial cavity was seen to be distended with protruding endometrial masses of intermediate signal intensity, and the junctional zone was disrupted beneath the masses. Both ovaries were best seen on T2-weighted MR imaging and showed multiple small peripheral cysts and low signal-intensity central stroma.
[Diagnosis of osteoporosis occurring in autoimmune thyroid gland disease].
Radojković, Ivan; Radojković, Jana; Djurica, Snezana
2005-10-01
Osteoporosis, or porotic bone, is a general, systemic bone disease which is manifested by fracture as its consequence. The main characteristics of this disease are the loss of bone microarchitecture, bone mass reduction, and increased fragility; the result is susceptibility to fracture. The etiology of osteoporosis is polymorphic. Its socio-medical importance is enormous, since there is one osteoporotic fracture every 20 seconds worldwide; 1.6 million osteoporotic fractures occur annually throughout the world. The thyroid gland is susceptible to autoimmune reactions that lead to autoimmune diseases, just like many other organs. The autoimmune disorder is a final consequence of a failure, in some instance, within the crucial mechanism of regulation of self-tissue tolerance. The main goal is to prove the presence of osteoporosis with inexpensive and quick diagnostics, and to distinguish among the causes that lead to it; in addition, to indicate the importance of osteoporosis that is caused by normal metabolic processes which are an inevitable part of ageing. Diagnosis of osteoporosis can be done through the laboratory, which is a tiresome, time-consuming task. Measurements of BMD can also be performed using new devices: osteometers can be constructed on the basis of X-ray photon energy or ultrasound. The most contemporary type uses a laser beam and approximates the distance of additional tissue that also absorbs part of the energy, changing the absorption at the reception unit and thus making the measurement results accurate. In diagnosing BMD by osteometer, one faces certain difficulties. When axial quantitative CT is used, the value may be falsely lower because of the loss of energy absorbed by the aorta, which is often calcified in elderly people. In devices with transversal scanning, of the same nature and technology, a part of the energy is absorbed by transversal and spinal vertebrae. After the research, one may conclude that the most
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products supply chain...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on ℝ^N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a maximum principle for a cooperative elliptic system on the whole ℝ^N. Moreover, we prove the existence of solutions by an approximation method for the considered system.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion vs. conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
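The projection step itself is simple to sketch; the paper's contribution is the morphological multiresolution decomposition built on top of it, which is beyond a snippet. A minimal axis-aligned MIP in NumPy (illustrative only):

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection: for each axis-aligned ray,
    keep only the brightest voxel encountered along it."""
    return volume.max(axis=axis)

# Tiny synthetic volume: one bright voxel buried in a dark background.
vol = np.zeros((4, 3, 3))
vol[2, 1, 1] = 7.0
proj = mip(vol, axis=0)   # project along the first axis
print(proj[1, 1])         # -> 7.0
```

Because the max operator commutes with the dilations used in adjunction pyramids, coarse projections can be refined progressively, which is what enables the perfect-reconstruction scheme described in the abstract.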
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy ... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. ... Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
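As an illustration of the general procedure (not of the cited papers' specific molecular-simulation implementations), maximum-entropy reweighting of an ensemble to match one experimental average reduces to one-dimensional root finding for a Lagrange multiplier: weights w_i ∝ exp(-λ x_i) minimally perturb the original ensemble, in the KL sense, subject to the constraint.

```python
import math, random

def maxent_reweight(samples, target_mean, lo=-5.0, hi=5.0, tol=1e-10):
    """Find lambda such that weights w_i ~ exp(-lambda * x_i)
    reproduce the target average over the samples."""
    def reweighted_mean(lam):
        ws = [math.exp(-lam * x) for x in samples]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, samples)) / z
    # reweighted_mean is monotone decreasing in lambda -> bisection
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if reweighted_mean(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(1)
sim = [random.gauss(0.0, 1.0) for _ in range(5000)]  # "simulation" ensemble
lam = maxent_reweight(sim, target_mean=0.5)          # "experimental" average
ws = [math.exp(-lam * x) for x in sim]
mean = sum(w * x for w, x in zip(ws, sim)) / sum(ws)
print(round(mean, 3))  # matches the target 0.5
```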
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
16 CFR 1505.8 - Maximum acceptable material temperatures.
2010-01-01
... Association, 155 East 44th Street, New York, NY 10017. Material Degrees C. Degrees F. Capacitors (1) (1) Class... capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC GIS Inventory (aka Ramona) — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where, Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation. ..... data sets that differ considerably in the magnitude.
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
Solar Panel Maximum Power Point Tracker for Power Utilities
Sandeep Banik,
2014-01-01
Full Text Available "Solar Panel Maximum Power Point Tracker for Power Utilities": as the name implies, this is a photovoltaic system that uses the photovoltaic array as a source of electrical power supply. Since every photovoltaic (PV) array has an optimum operating point, called the maximum power point, which varies depending on the insolation level and array voltage, a maximum power point tracker (MPPT) is needed to operate the PV array at its maximum power point. The objective of this thesis project is to build a photovoltaic (PV) array of 121.6 V DC (6 cells, each 20 V, 100 W) and convert the DC voltage to single-phase 120 V, 50 Hz AC voltage by switch-mode power converters and inverters.
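The perturb-and-observe rule at the heart of most such trackers can be sketched in a few lines. This is a generic textbook sketch, not the project's controller; the P-V curve below is a hypothetical module with its maximum at 17 V.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One iteration of the perturb-and-observe MPPT rule: keep
    perturbing the operating voltage in the direction that increased
    power, reverse otherwise. Returns the next voltage setpoint."""
    dv, dp = v - v_prev, p - p_prev
    if dp == 0:
        return v + step
    # Move uphill on the P-V curve.
    return v + step if (dp > 0) == (dv > 0) else v - step

# Toy P-V curve of a hypothetical module, peaking at 17 V / 100 W.
def power(v):
    return max(0.0, 100 - 0.8 * (v - 17.0) ** 2)

v_prev, v = 10.0, 10.5
for _ in range(60):
    v_next = perturb_and_observe(v, power(v), v_prev, power(v_prev))
    v_prev, v = v, v_next
print(v)  # settles within one perturbation step of the 17 V maximum
```

The residual oscillation around the maximum power point visible in this sketch is exactly what the nonlinear open-circuit-voltage estimate described in the header abstract is meant to reduce.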
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
Finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has greatly drawn statisticians' attention. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship among stock market prices and rubber prices for sampled countries. Results show that there is a negative effect between rubber price and stock market price for Malaysia, Thailand, Philippines and Indonesia.
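In practice the maximum likelihood estimates of a two-component normal mixture are computed with the EM algorithm, since the likelihood has no closed-form maximizer. A self-contained sketch on synthetic data (illustrative only; the paper fits real price series):

```python
import math, random

def em_two_gaussians(xs, iters=200):
    """EM for a two-component normal mixture: alternate the E-step
    (posterior responsibilities) with the M-step (weighted ML updates)."""
    mu1, mu2 = min(xs), max(xs)              # crude initialisation
    s1 = s2 = (max(xs) - min(xs)) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            p1 = w * math.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
            p2 = (1 - w) * math.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
            r.append(p1 / (p1 + p2))
        # M-step: weighted means, standard deviations, mixing weight
        n1 = sum(r); n2 = len(xs) - n1
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2) or 1e-6
        w = n1 / len(xs)
    return mu1, mu2, w

random.seed(0)
data = [random.gauss(-2, 0.5) for _ in range(300)] + \
       [random.gauss(3, 1.0) for _ in range(300)]
m1, m2, w = em_two_gaussians(data)
print(round(m1, 1), round(m2, 1))  # close to the true means -2 and 3
```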
Pan, Sudip; Solà, Miquel; Chattaraj, Pratim K
2013-02-28
Hardness and electrophilicity values for several molecules involved in different chemical reactions are calculated at various levels of theory and by using different basis sets. Effects of these aspects as well as different approximations to the calculation of those values vis-à-vis the validity of the maximum hardness and minimum electrophilicity principles are analyzed in the cases of some representative reactions. Among 101 studied exothermic reactions, 61.4% and 69.3% of the reactions are found to obey the maximum hardness and minimum electrophilicity principles, respectively, when the hardness of products and reactants is expressed in terms of their geometric means. However, when we use the arithmetic mean, the percentage reduces to some extent. When we express the hardness in terms of scaled hardness, the percentage obeying the maximum hardness principle improves. We have observed that the maximum hardness principle is more likely to fail in the cases of very hard species like F⁻, H₂, CH₄, N₂, and OH appearing on the reactant side and in most cases of association reactions. Most of the association reactions obey the minimum electrophilicity principle nicely. The best results (69.3%) for the maximum hardness and minimum electrophilicity principles reject the 50% null hypothesis at the 2% level of significance.
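The dependence on the averaging scheme noted above is easy to demonstrate: the geometric mean penalizes spread among the hardness values, so the sign of the hardness change Δη can differ between the two conventions. A toy example with hypothetical hardness values (not taken from the paper):

```python
import math

def geometric_mean(values):
    return math.prod(values) ** (1.0 / len(values))

def arithmetic_mean(values):
    return sum(values) / len(values)

# Hypothetical hardness values (eV) for the two sides of a reaction,
# chosen so the two averaging schemes give opposite verdicts.
reactants = [12.0, 3.0]   # wide spread
products = [7.4, 7.4]     # no spread

for mean in (geometric_mean, arithmetic_mean):
    dh = mean(products) - mean(reactants)
    print(f"{mean.__name__}: delta-eta = {dh:+.3f} eV")
# The maximum hardness principle is "obeyed" when delta-eta > 0, so
# this reaction obeys it under the geometric mean but not the arithmetic one.
```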
Exponentially many maximum genus embeddings and genus embeddings for complete graphs
REN Han; BAI Yun
2008-01-01
There are many results on the maximum genus, among which most concern the existence of such embeddings, and little attention has been paid to estimating the number of such embeddings and to their applications. In this paper we study the number of maximum genus embeddings of a graph and find an exponential lower bound for such numbers. Our results show that, in the general case, a simple connected graph has exponentially many distinct maximum genus embeddings. In particular, a connected cubic graph G of order n always has at least (√2)^(m+n+α/2) distinct maximum genus embeddings, where α and m denote, respectively, the number of inner vertices and odd components of an optimal tree T. What surprises us most is that these two extremal embeddings (i.e., the maximum genus embeddings and the genus embeddings) are sometimes closely related to each other. In fact, as applications, we show that for sufficiently large natural numbers n, there are at least C·2^(n/4) genus embeddings of the complete graph Kn with n ≡ 4, 7, 10 (mod 12), where C is a constant depending on the residue class of n modulo 12. These results improve the bounds obtained by Korzhik and Voss, and the methods used here are much simpler and more direct.
Why Does Bureaucratic Corruption Occur in the EU?
Brandt, Urs Steiner; Svendsen, Gert Tinggaard
2013-01-01
Why does bureaucratic corruption occur in the EU system? Several examples suggest that bureaucratic corruption exists and that the Commission’s anti-fraud agency, OLAF, is not a fully independent authority. We thus develop a novel interpretation of the principal-supervisor-agent model to cope with non-independent anti-fraud units. This model shows that corruption is likely to occur when the expected value to the client from bribing the agent is larger than the expected value to the principal of truth-telling by the supervisor. Overall, this analysis points to the risks of flawed incentives...
Factors affecting the depth of burns occurring in medical institutions.
Cho, Young Soon; Choi, Young Hwan; Yoon, Cheonjae; You, Je Sung
2015-05-01
Most cases of burns occurring in medical institutions are associated with activities involving heat. It is very difficult to detect these burns. To date, there are few reports on burns occurring in medical institutions. The purpose of this paper was to analyze the etiology of burns occurring in medical institutions and to elucidate the factors affecting burn depth. We conducted a retrospective analysis of the medical records of patients who visited our center from April 2008 to February 2013. This study enrolled all patients with burns occurring in the medical institution during or related to treatment. We excluded burn patients whose burns were not related to treatment (for example, we excluded patients with scalding burns that occurred in the hospital cafeteria and pediatric patients with hot water burns from the water purifier). However, patients with burns that occurred in the recovery room after general anesthesia were included. A total of 115 patients were enrolled in this study. The average patient age was 41.5 years, with more women than men (M:F=31:84). There were 29 cases (25.3%) of superficial burns (first-degree and superficial second-degree) and 86 cases (74.7%) of deep burns (deep second-degree and third-degree). Hot packs were the most common cause of burns (27 cases, 23.5%), followed by laser therapy, heating pads, and grounding pads, accounting for 15 cases each. There were 89 cases (77.4%) of contact burns and 26 cases (22.6%) of non-contact burns. The most common site of burns was the lower extremities (41 cases, 35.7%). The burn site and contact burns were both factors affecting burn depth. The rate of deep burns was higher in patients with contact burns than in those with non-contact burns (odds ratio 4.26) and was associated with lower body burns (odds ratio 2.85). In burns occurring in medical institutions, there is a high probability of a deep burn if it is a contact burn or occurs in the lower body. Therefore, safety guidelines are needed
Can spontaneous symmetry breaking occur in potential with one minimum?
Acus, A
2000-01-01
Spontaneous symmetry breaking occurs when the symmetry that a physical system possesses is not preserved for the ground state of the system. Although the procedure of symmetry breaking is quite clear from the mathematical point of view, the physical interpretation of the phenomenon deserves to be better understood. In this note we present a simple and instructive example of symmetry breaking in a mechanical system. It demonstrates that spontaneous symmetry breaking can occur for spatially extended solutions in a potential characterised by a single minimum.
Skin picking disorder with co-occurring body dysmorphic disorder
Grant, Jon E; Redden, Sarah A; Leppink, Eric W
2015-01-01
There is clinical overlap between skin picking disorder (SPD) and body dysmorphic disorder (BDD), but little research has examined clinical and cognitive correlates of the two disorders when they co-occur. Of 55 participants with SPD recruited for a neurocognitive study and two pharmacological studies, 16 (29.1%) had co-occurring BDD. SPD participants with and without BDD were compared to each other and to 40 healthy volunteers on measures of symptom severity, social functioning, and cognitive assessments using the Stop-signal task (assessing response impulsivity) and the Intra...
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by O3 production driven by lightning nitrogen oxides (NOx) emissions, which account for 62% of the tropospheric column O3 in May 2006. We find that the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. O3 production in the lightning outflow from Central Africa and South America peaks in May in both regions and is directly responsible for the O3 maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones and then transported northward to the ESIO.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
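For the simplest model mentioned above, a single real tone in Gaussian noise, the approximate ML frequency estimate coincides with the periodogram maximizer, and evaluating the periodogram on a dense FFT grid is one common way to avoid the local maxima of the multimodal cost function. A minimal sketch (the signal parameters and grid size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
f_true = 0.123                    # true frequency, cycles per sample
n = np.arange(N)
x = np.cos(2 * np.pi * f_true * n + 0.7) + 0.1 * rng.standard_normal(N)

# Zero-padded FFT gives a dense grid of candidate frequencies; the
# periodogram peak is the (approximate) ML frequency estimate.
nfft = 1 << 15
spec = np.abs(np.fft.rfft(x, nfft))
freqs = np.fft.rfftfreq(nfft)     # grid in cycles per sample
f_hat = freqs[np.argmax(spec)]
```

The grid search makes the estimator robust to the sidelobe maxima that trap local ascent methods started far from the true frequency.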
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed based on the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximizing solution of the model was exactly the frequency distribution of a population at Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg genetic equilibrium when the genotype entropy of the population reaches the maximal possible value, and that the frequency distribution of maximum entropy is equivalent to the distribution of the Hardy-Weinberg equilibrium law at one locus. They further assumed that the frequency distribution of maximum entropy was equivalent to all genetic equilibrium distributions. This is incorrect, however. The frequency distribution of maximum entropy is only equivalent to the distribution of Hardy-Weinberg equilibrium with respect to one locus or several limited loci. The case of limited loci is proved in this paper. Finally, we also discuss an example in which the maximum entropy distribution is not equivalent to other genetic equilibria.
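The one-locus claim can be checked numerically: fixing the allele frequencies p and q leaves one free parameter, the heterozygote frequency t, and maximizing the genotype entropy over t recovers the Hardy-Weinberg value t = 2pq. A sketch under the assumption (consistent with the derivative condition) that the two ordered heterozygotes Aa and aA count as separate classes of probability t/2 each:

```python
import numpy as np

p, q = 0.3, 0.7   # allele frequencies, p + q = 1

# One free parameter: heterozygote frequency t. The allele-frequency
# constraints then fix f_AA = p - t/2 and f_aa = q - t/2.
t = np.linspace(1e-9, 2 * min(p, q) - 1e-9, 200001)
s = t / 2
f_AA, f_aa = p - s, q - s
# Genotype entropy over the four ordered genotypes AA, Aa, aA, aa.
H = -(f_AA * np.log(f_AA) + 2 * s * np.log(s) + f_aa * np.log(f_aa))
t_star = t[np.argmax(H)]          # entropy-maximizing heterozygote frequency
```

The grid maximizer lands on t = 2pq = 0.42, i.e., the Hardy-Weinberg proportions (p², 2pq, q²), up to the grid spacing.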
Suligowski, Roman
2014-05-01
This study estimates Probable Maximum Precipitation based upon the physical mechanisms of precipitation formation at the Kielce Upland. The estimation stems from meteorological analysis of extremely high precipitation events which occurred in the area between 1961 and 2007, causing serious flooding from rivers that drain the entire Kielce Upland. The meteorological situation has been assessed drawing on the synoptic maps, baric topography charts, satellite and radar images, as well as the results of meteorological observations derived from surface weather observation stations. The most significant elements of this research include the comparison between distinctive synoptic situations over Europe and the subsequent determination of the typical rainfall-generating mechanism. This allows the author to identify the source areas of the air masses responsible for extremely high precipitation at the Kielce Upland. Analysis of the meteorological situations showed that the source areas of the humid air masses which cause the largest rainfalls at the Kielce Upland are the northern Adriatic Sea and the north-eastern coast of the Black Sea. Flood hazard in the Kielce Upland catchments was triggered by daily precipitation of over 60 mm. The highest representative dew point temperature in the source areas of warm air masses (those responsible for high precipitation at the Kielce Upland) exceeded 20 degrees Celsius, with a maximum of 24.9 degrees Celsius, while precipitable water amounted to 80 mm. The value of precipitable water is also used for computation of factors characterising the system, namely the mass transformation factor and the system effectiveness factor. The mass transformation factor is computed based on precipitable water in the feeding mass and precipitable water in the source area. The system effectiveness factor (as the indicator of the maximum inflow velocity and the maximum velocity in the zone of front or ascending currents, forced by orography) is computed from the quotient of precipitable water in
Carriquiry, José; Sanchez, Alberto; Leduc, Guillaume
2015-01-01
International audience; The oxygen and carbon isotopic compositions of benthic foraminiferal tests were measured on sedimentary sequences retrieved on the Magdalena Margin, off southern Baja California, Mexico. We reconstruct the hydrographic changes along the water column that occurred in the northeastern tropical Pacific since the Last Glacial Maximum (LGM) and compare those changes to the ones that occurred in the northwest Pacific (NWP, i.e., off Japan and Russia), in the northeast Pacifi...
Tripling the maximum imaging depth with third-harmonic generation microscopy.
Yildirim, Murat; Durr, Nicholas; Ben-Yakar, Adela
2015-09-01
The growing interest in performing high-resolution, deep-tissue imaging has galvanized the use of longer excitation wavelengths and three-photon-based techniques in nonlinear imaging modalities. This study presents a threefold improvement in maximum imaging depth of ex vivo porcine vocal folds using third-harmonic generation (THG) microscopy at 1552-nm excitation wavelength compared to two-photon microscopy (TPM) at 776-nm excitation wavelength. The experimental, analytical, and Monte Carlo simulation results reveal that THG improves the maximum imaging depth observed in TPM significantly from 140 to 420 μm in a highly scattered medium, reaching the expected theoretical imaging depth of seven extinction lengths. This value almost doubles the previously reported normalized imaging depths of 3.5 to 4.5 extinction lengths using three-photon-based imaging modalities. Since tissue absorption is substantial at the excitation wavelength of 1552 nm, this study assesses the tissue thermal damage during imaging by obtaining the depth-resolved temperature distribution through a numerical simulation incorporating an experimentally obtained thermal relaxation time (τ). By shuttering the laser for a period of 2τ, the numerical algorithm estimates a maximum temperature increase of ∼2°C at the maximum imaging depth of 420 μm. The paper demonstrates that THG imaging using 1552 nm as an illumination wavelength with effective thermal management proves to be a powerful deep imaging modality for highly scattering and absorbing tissues, such as scarred vocal folds.
Fang, W.; Quan, S. H.; Xie, C. J.; Tang, X. F.; Wang, L. L.; Huang, L.
2016-03-01
In this study, a direct-current/direct-current (DC/DC) converter with maximum power point tracking (MPPT) is developed to down-convert the high voltage DC output from a thermoelectric generator to the lower voltage required to charge batteries. To improve the tracking accuracy and speed of the converter, a novel MPPT control scheme characterized by an aggregated dichotomy and gradient (ADG) method is proposed. In the first stage, the dichotomy algorithm is used as a fast search method to find the approximate region of the maximum power point. The gradient method is then applied for rapid and accurate tracking of the maximum power point. To validate the proposed MPPT method, a test bench composed of an automobile exhaust thermoelectric generator was constructed for harvesting the automotive exhaust heat energy. Steady-state and transient tracking experiments under five different load conditions were carried out using a DC/DC converter with the proposed ADG and with three traditional methods. The experimental results show that the ADG method can track the maximum power within 140 ms with a 1.1% error rate when the engine operates at 3300 rpm @ 71 N·m, which is superior to the performance of the single dichotomy method, the single gradient method and the perturbation and observation method from the viewpoint of improved tracking accuracy and speed.
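The two-stage ADG idea, coarse bracketing by dichotomy on the slope sign followed by gradient refinement, can be sketched on a hypothetical power-voltage curve. The Thevenin source model, step sizes, and iteration counts below are illustrative assumptions, not the authors' converter parameters:

```python
def adg_mppt(power, v_lo, v_hi, n_dichotomy=8, n_gradient=30, step=0.5, h=1e-4):
    """Aggregated dichotomy and gradient (ADG) search on a P-V curve."""
    dP = lambda v: (power(v + h) - power(v - h)) / (2 * h)  # numerical slope
    # Stage 1: dichotomy on the slope sign quickly brackets the maximum
    # of the unimodal P-V curve.
    for _ in range(n_dichotomy):
        mid = 0.5 * (v_lo + v_hi)
        if dP(mid) > 0:
            v_lo = mid
        else:
            v_hi = mid
    # Stage 2: gradient ascent refines the operating point accurately.
    v = 0.5 * (v_lo + v_hi)
    for _ in range(n_gradient):
        v += step * dP(v)
    return v

# Hypothetical source: Thevenin model with Voc = 8 V and R_int = 2 ohm,
# so P(V) = V * (Voc - V) / R peaks at Voc / 2 = 4 V.
Voc, R = 8.0, 2.0
power = lambda v: v * (Voc - v) / R
v_mpp = adg_mppt(power, 0.0, Voc)
```

The dichotomy stage shrinks the search interval geometrically, so the gradient stage starts near the peak and converges in a few steps.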
Marcia Atauri Cardeli de Lucena
2013-12-01
Low availability of nitrogen (N) is a factor that limits forage production. Pastures are mostly formed of grasses, which need large amounts of N to sustain high yields. Additionally, the availability of this nutrient affects the persistence and quality of the forage produced. However, when nitrogen-containing fertilizers are applied, up to 50% of the N can be lost, making their use costly for farmers. N is easily lost in gaseous form, and urea, widely used in agriculture, contains 46% N. When urea comes in contact with moisture in the soil, hydrolysis occurs due to the action of the enzyme urease, accelerating the transformation of urea into ammonia, which can be lost by volatilization. One of the techniques to increase the efficiency of urea use is the application of nitrogen fertilizers together with urease inhibitors, to retard the breakdown of urea so that it becomes incorporated into the soil slowly. One of the inhibitors used is N-(n-butyl) thiophosphoric triamide (NBPT). This study aimed to assess the effect of nitrogen sources and doses on some productive characteristics of Áries grass (Panicum maximum Jacq.) to find ways to improve the effectiveness of nitrogen application. The tests were performed at the Centro Nutrição Animal e Pastagens - Instituto de Zootecnia in Nova Odessa, São Paulo, from March to August 2012. The experimental design was randomized blocks, with five replications in a 2 × 3 factorial setup, in which we studied the use of urea and urea containing a urease inhibitor (NBPT), in pots (3.34 dm³). The treatments involved the following rates of N application: 0, 75.0 and 150.0 kg ha⁻¹. The traits analyzed were dry biomass, leaf dry weight, dry weight of pseudostems, number of tillers, leaf area, nitrogen concentration and accumulation, and concentrations of chlorophyll and flavonoids. The data were analyzed using the GLM procedure of the SAS program. The results showed that nitrogen promoted considerable improvements in the plants, contributing to
CLEARANCE OF INDOMETHACIN OCCURS PREDOMINANTLY BY RENAL GLUCURONIDATION
MOOLENAAR, F; CRANCRINUS, S; VISSER, J; DEZEEUW, D; MEIJER, DKF
1992-01-01
In this report we describe the conditions of collection, storage and handling of urine samples, collected after oral dosing with indometacin in man, in order to maintain the integrity of the labile glucuronide formed. We found that the body clearance occurs predominantly by renal metabolism, due to
Disseminated Fusariosis Occurring in Two Patients Despite Posaconazole Prophylaxis▿
Bose, Prithviraj; Parekh, Hiral D.; Holter, Jennifer L.; Greenfield, Ronald A.
2011-01-01
Posaconazole is widely used for prophylaxis against invasive fungal infections in patients undergoing myeloablative therapy. Disseminated fusariosis is a serious invasive mold infection in such patients. Preclinical and clinical studies indicate activity of posaconazole against Fusarium. We describe two cases of disseminated fusariosis that occurred despite posaconazole prophylaxis.
Naturally occurring fatty acids: source, chemistry and uses
Naturally occurring fatty acids are a large and complex class of compounds found in plants and animals. Fatty acids are abundant and of interest because of their renewability, biodegradability, biocompatibility, low cost, and fascinating chemistry. Of the many fatty acids, only 20-25 of them are widel...
Selective extraction of naturally occurring radioactive Ra2+
van Leeuwen, F.W.B.; Verboom, Willem; Reinhoudt, David
2005-01-01
Organic extractants play a significant role in the selective removal of radioactive cations from waste streams. Although literature on the selective removal of man-made radioactive materials such as americium (Am) is widespread, the selective removal of naturally occurring radioactive material such
Resolving the Diaporthe species occurring on soybean in Croatia
Santos, J.M.; Vrandečić, K.; Ćosić, J.; Duvnjak, T.; Phillips, A.J.L.
2012-01-01
Diaporthe (anamorph = Phomopsis) species are plant pathogens and endophytes on a wide range of hosts including economically important crops. At least four Diaporthe taxa occur on soybean and they are responsible for serious diseases and significant yield losses. Although several studies have extensi
Integrative Priming Occurs Rapidly and Uncontrollably during Lexical Processing
Estes, Zachary; Jones, Lara L.
2009-01-01
Lexical priming, whereby a prime word facilitates recognition of a related target word (e.g., "nurse" → "doctor"), is typically attributed to association strength, semantic similarity, or compound familiarity. Here, the authors demonstrate a novel type of lexical priming that occurs among unassociated, dissimilar,…
Botteri's Sparrow (Peucaea botterii) Occurs in Northern Coahuila, Mexico
van Els, Paul; Canales-del-Castillo, Ricardo; Klicka, John
2011-01-01
Botteri’s Sparrow (Peucaea botterii) occurs widely in the shrub-grasslands of southern North America. We report a breeding population of the species in the Sierra de la Encantada of northern Coahuila, Mexico, ~80 km from the Big Bend area of Texas and >300 km from the nearest previously known breedi
Toward Improved Rotor-Only Axial Fans—Part II: Design Optimization for Maximum Efficiency
Sørensen, Dan Nørtoft; Thompson, M. C.; Sørensen, Jens Nørkær
2000-01-01
Numerical design optimization of the aerodynamic performance of axial fans is carried out, maximizing the efficiency in a design interval of flow rates. Tip radius, number of blades, and angular velocity of the rotor are fixed, whereas the hub radius and spanwise distributions of chord length...
Power System Structural Vulnerability Assessment based on an Improved Maximum Flow Approach
Fang, Jiakun; Su, Chi; Chen, Zhe
2017-01-01
to identify the critical lines in a system. The proposed method consists of two major steps. First, the power network is modeled as a graph with edges (transmission lines, transformers, etc.) and nodes (buses, substations, etc.). The critical scenarios are identified by using the principal component analysis...
The characteristics of gas hydrates occurring in natural environment
Lu, H.; Moudrakovski, I.; Udachin, K.; Enright, G.; Ratcliffe, C.; Ripmeester, J.
2009-12-01
In the past few years, extensive analyses have been carried out for characterizing the natural gas hydrate samples from Cascadia, offshore Vancouver Island; Mallik, Mackenzie Delta; Mount Elbert, Alaska North Slope; Nankai Trough, offshore Japan; Japan Sea and offshore India. With the results obtained, it is possible to give a general picture of the characteristics of gas hydrates occurring in natural environment. Gas hydrate can occur in sediments of various types, from sands to clay, although it is preferentially enriched in sediments of certain types, for example coarse sands and fine volcanic ash. Most of the gas hydrates in sediments are invisible, occurring in the pores of the sediments, while some hydrates are visible, appearing as massive, nodular, planar, vein-like forms and occurring around the seafloor, in the fractures related to fault systems, or any other large spaces available in sediments. Although methane is the main component of most of the natural gas hydrates, C2 to C7 hydrocarbons have been recognized in hydrates, sometimes even in significant amounts. Shallow marine gas hydrates have been found generally to contain minor amounts of hydrogen sulfide. Gas hydrate samples with complex gas compositions have been found to have heterogeneous distributions in composition, which might reflect changes in the composition of the available gas in the surrounding environment. Depending on the gas compositions, the structure type of a natural gas hydrate can be structure I, II or H. For structure I methane hydrate, the large cages are almost fully occupied by methane molecules, while the small cages are only partly occupied. Methane hydrates occurring in different environments have been identified with almost the same crystallographic parameters.
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds. Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and the maximum rate of explosion pressure rise according to STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds - Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The testing of explosions of wood dust clouds showed that the maximum value of the pressure was reached at a concentration of 450 g/m³, where its value is 7.95 bar. The fastest rise of pressure was also observed at a concentration of 450 g/m³, at 68 bar/s.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States). Biological, Environmental, and Climate Sciences Dept.
2014-12-02
ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. There is residual radioactive contamination from the plant, which needs to be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radiation dose in excess of 25 mrem/y. The objectives of this report are: (a) To present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) Provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) Estimate the maximum concentration in a well located outside of the fill material; and (d) Perform a sensitivity analysis of key parameters.
Going Against the Educational Grain: Can Learning Occur Backward?
Linn, Bernard S.; Zeppa, Robert
1982-01-01
Junior medical students were given answers to questions from which a final written examination was derived. Results seemed to indicate that grades on both the specific test and subsequent tests improve. (MLW)
Silver, Jeremy D; Ritchie, Matthew E; Smyth, Gordon K
2009-01-01
is developed for exact maximum likelihood estimation (MLE) using high-quality optimization software and using the saddle-point estimates as starting values. "MLE" is shown to outperform heuristic estimators proposed by other authors, both in terms of estimation accuracy and in terms of performance on real data... The saddle-point approximation is an adequate replacement in most practical situations. The performance of normexp for assessing differential expression is improved by adding a small offset to the corrected intensities...
Development of an Intelligent Maximum Power Point Tracker Using an Advanced PV System Test Platform
Spataru, Sergiu; Amoiridis, Anastasios; Beres, Remus Narcis
2013-01-01
The performance of photovoltaic systems is often reduced by the presence of partial shadows. The system efficiency and availability can be improved by a maximum power point tracking algorithm that is able to detect partial shadow conditions and to optimize the power output. This work proposes an ... photovoltaic inverter system test platform that is able to reproduce realistic partial shadow conditions, both in simulation and on hardware test system...
1979-01-01
The computer program Linear SCIDNT, which evaluates rotorcraft stability and control coefficients from flight or wind-tunnel test data, is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Solar Forcing of Greenland Climate during the Last Glacial Maximum
Adolphi, Florian; Muscheler, Raimund; Svensson, Anders; Aldahan, Ala; Possnert, Göran; Beer, Juerg; Sjolte, Jesper; Björck, Svante
2014-05-01
The role of solar forcing in climate changes is a matter of continuous debate. Challenges arise from the short period of direct observations of total solar irradiance (TSI), which indicate minor TSI variations of approximately 1 ‰ over an 11-year cycle, and the limited understanding of possible feedback mechanisms. In contrast to this, there is evidence from paleoclimate records for a tight coupling of solar activity and regional climate (e.g., Bond et al. 2001, Martin-Puertas et al. 2012). One proposed mechanism to amplify the Sun's influence on climate involves the relatively large modulation of the solar UV output (Haigh et al. 2010). This alters the radiative balance in the stratosphere via ozone feedback processes and eventually propagates downwards, causing changes in the tropospheric circulation (Ineson et al. 2011). The regional response to this forcing may, however, also depend on orbital forcing of the mean state of the atmosphere (Dietrich et al. 2012). Prior to direct observations, cosmogenic radionuclides such as 10Be and 14C are the most reliable proxies of solar activity. Their atmospheric production rates depend on the flux of galactic cosmic rays into the atmosphere, which in turn is modulated by the strength of the Earth's and the solar magnetic fields. However, archives of 10Be and 14C are additionally affected by changes of their respective geochemical environment. Owing to their fundamentally different geochemistry, a combined analysis of 10Be and 14C records can help to isolate production rate variations more reliably and thus lead to improved reconstructions of solar variability. Due to the absence of high-quality high-resolution data, this approach has so far been limited to the Holocene. We will present the first solar activity reconstruction for the end of the last glacial (22.5 - 10 ka BP) based on the cosmogenic radionuclides 10Be and 14C. We will compare glacial solar activity variations to Holocene features through combined interpretation
Immunoregulation by naturally occurring and disease-associated autoantibodies
Nielsen, Claus H; Bendtzen, Klaus
2012-01-01
The role of naturally occurring autoantibodies (NAbs) in homeostasis and in disease manifestations is poorly understood. In the present chapter, we review how NAbs may interfere with the cytokine network and how NAbs, through formation of complement-activating immune complexes with soluble self-antigens, may promote the uptake and presentation of self-molecules by antigen-presenting cells. Both naturally occurring and disease-associated autoantibodies against a variety of cytokines have been reported, including NAbs against interleukin (IL)-1α, IL-6, IL-8, IL-10, granulocyte-macrophage colony... -receptors on antigen-presenting cells and thereby regulate T-cell activity. Knowledge of the influence of NAbs against cytokines on immune homeostasis is likely to have wide-ranging implications both in understanding pathogenesis and in treatment of many immunoinflammatory disorders, including a number of autoimmune...
Individual Module Maximum Power Point Tracking for Thermoelectric Generator Systems
Vadstrup, Casper; Schaltz, Erik; Chen, Min
2013-07-01
In a thermoelectric generator (TEG) system the DC/DC converter is under the control of a maximum power point tracker which ensures that the TEG system outputs the maximum possible power to the load. However, if the conditions, e.g., temperature, health, etc., of the TEG modules are different, each TEG module will not produce its maximum power. If each TEG module is controlled individually, each TEG module can be operated at its maximum power point and the TEG system output power will therefore be higher. In this work a power converter based on noninverting buck-boost converters capable of handling four TEG modules is presented. It is shown that, when each module in the TEG system is operated under individual maximum power point tracking, the system output power for this specific application can be increased by up to 8.4% relative to the situation when the modules are connected in series and 16.7% relative to the situation when the modules are connected in parallel.
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle.
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables, we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits: dispersal syndrome and growth form together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r2=0.88). Despite exceeding unity, the normalized moments were consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
Sensory-motor transformations for speech occur bilaterally.
Cogan, Gregory B; Thesen, Thomas; Carlson, Chad; Doyle, Werner; Devinsky, Orrin; Pesaran, Bijan
2014-03-01
Historically, the study of speech processing has emphasized a strong link between auditory perceptual input and motor production output. A kind of 'parity' is essential, as both perception- and production-based representations must form a unified interface to facilitate access to higher-order language processes such as syntax and semantics, believed to be computed in the dominant, typically left hemisphere. Although various theories have been proposed to unite perception and production, the underlying neural mechanisms are unclear. Early models of speech and language processing proposed that perceptual processing occurred in the left posterior superior temporal gyrus (Wernicke's area) and motor production processes occurred in the left inferior frontal gyrus (Broca's area). Sensory activity was proposed to link to production activity through connecting fibre tracts, forming the left lateralized speech sensory-motor system. Although recent evidence indicates that speech perception occurs bilaterally, prevailing models maintain that the speech sensory-motor system is left lateralized and facilitates the transformation from sensory-based auditory representations to motor-based production representations. However, evidence for the lateralized computation of sensory-motor speech transformations is indirect and primarily comes from stroke patients that have speech repetition deficits (conduction aphasia) and studies using covert speech and haemodynamic functional imaging. Whether the speech sensory-motor system is lateralized, like higher-order language processes, or bilateral, like speech perception, is controversial. Here we use direct neural recordings in subjects performing sensory-motor tasks involving overt speech production to show that sensory-motor transformations occur bilaterally. We demonstrate that electrodes over bilateral inferior frontal, inferior parietal, superior temporal, premotor and somatosensory cortices exhibit robust sensory-motor neural
A Stochastical Model for the Earthquake Occurences in Turkey
Gamze ÖZEL
2009-04-01
The fields of seismology and earthquake engineering deal with studies for earthquake prediction, hazard assessment and the prevention of possible damage due to destructive earthquakes. Various kinds of statistical models are used for earthquake occurrences. The most familiar model is a Poisson process for random series of events. However, the Poisson process is insufficient if the incorporation of more information about the seismic process is required. Recently, a compound Poisson process has been proposed as an alternative to the Poisson process for earthquake analysis. In this study, the compound Poisson process is introduced and the probabilities of earthquake numbers with magnitude M ≥ 5.0 which will occur within 3 and 6 months, and within 5 and 10 years, have been obtained for Turkey from the Poisson process. Then, it is shown that the aftershock sequences follow a geometric distribution. In this way, the probabilities of the total number of aftershocks with magnitude M ≥ 4.0 which will occur within one year and two years in Turkey are obtained from the compound Poisson process. Finally, the expected values of main shocks and of the total number of aftershocks which will occur within one year and two years are computed. The results show that the earthquake occurrence probability with magnitude M ≥ 5.0 increases, whereas the probability of the total number of aftershocks with magnitude M ≥ 4.0 decreases in Turkey as the time increases. Besides, the total aftershock number with magnitude M ≥ 4.0, after a main shock with magnitude M ≥ 5.0, equals zero with probability 0.48 within one year. The findings also indicate that approximately 130 main shocks with M ≥ 5.0 and 28 aftershocks with magnitude M ≥ 4.0 are expected within 30 years in Turkey.
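The compound Poisson construction described in the abstract (a Poisson number of main shocks, each triggering a geometrically distributed number of aftershocks) can be sketched in a few lines of Python. The rate `lam` and geometric parameter `p` below are illustrative placeholders, not the values fitted for Turkey; the sketch checks the Monte Carlo estimate against the closed form P(total = 0) = exp(-lam·(1-p)).

```python
import math
import random

def poisson_sample(rng, lam):
    # Knuth's method: count steps until the running product of uniforms
    # drops below e^{-lam}
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def geometric_sample(rng, p):
    # aftershock count per main shock, support {0, 1, 2, ...}, P(0) = p
    k = 0
    while rng.random() > p:
        k += 1
    return k

def p_no_aftershocks(lam, p, n_trials=100_000, seed=1):
    """Monte Carlo estimate of P(total aftershocks = 0) in one period
    under the compound Poisson model (illustrative parameters)."""
    rng = random.Random(seed)
    zeros = sum(
        1 for _ in range(n_trials)
        if all(geometric_sample(rng, p) == 0
               for _ in range(poisson_sample(rng, lam)))
    )
    return zeros / n_trials
```

With `lam = 1.0` and `p = 0.5`, the simulated probability agrees with the analytic value exp(-0.5) ≈ 0.607 to Monte Carlo accuracy.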
Filtering, control and fault detection with randomly occurring incomplete information
Dong, Hongli; Gao, Huijun
2013-01-01
This book investigates the filtering, control and fault detection problems for several classes of nonlinear systems with randomly occurring incomplete information. It proposes new concepts, including RVNs, ROMDs, ROMTCDs, and ROQEs. The incomplete information under consideration primarily includes missing measurements, time-delays, sensor and actuator saturations, quantization effects and time-varying nonlinearities. The first part of this book focuses on the filtering, control and fault detection problems for several classes of nonlinear stochastic discrete-time systems and
Naturally occurring pentaoxygenated, hexaoxygenated and dimeric xanthones: a literature survey
V. Peres
1997-08-01
This review gives information on the chemical study of 71 pentaoxygenated, 11 hexaoxygenated and 9 dimeric and more complex xanthones naturally occurring in 7 families, 29 genera and 62 species of higher plants, and 11 described as fern and fungal metabolites. The value of these groups of substances in connection with the pharmacological activity and the therapeutic use of some species is shown. The structural formulas of 23 isolated compounds and their distribution in the species studied are given.
The effects of naturally occurring impurities in rock salt
Alina-Mihaela Badescu; Alexandra Saftoiu
2014-09-01
In this paper we investigate the effect that naturally occurring impurities in salt mines have both on the effective permittivity of the medium and on radio wave propagation at ∼200 MHz. The effective permittivity is determined based on the dielectric properties of salt and the characteristics of the main impurities. We conclude that at such frequencies scattering is negligible compared with absorption. The effect of trapped water in different forms is also evaluated.
Interaction of Siglec-4 with naturally occurring and synthetic glycoconjugates
Koliwer-Brandl, Hendrik
2011-01-01
The aim of this work has been to provide insights into the structure-function relationship of Siglec-4 binding naturally occurring sialic acids as well as synthetic sialic acid derivatives. Structural information of the Siglec binding site and its interactions with glycoconjugates were obtained from homology modeling of the sialic acid binding domain and molecular docking calculations with several sialosides. Furthermore, the interaction of chemically synthesized sialic acid derivatives with ...
Myelodysplastic Syndrome Occurring in a Patient with Gorlin Syndrome.
Mull, Jamie L; Madden, Lisa M; Bayliss, Susan J
2016-07-01
We report a case of myelodysplastic syndrome (MDS) occurring in an African American boy with Gorlin syndrome with a novel PTCH1 mutation. Before developing MDS, the patient had been treated with chemotherapy and radiation for a medulloblastoma. He received a bone marrow transplant for the MDS and eventually died of treatment complications. Secondary hematologic malignancies are a known complication of certain chemotherapeutics, although whether a patient with Gorlin syndrome has a greater propensity for the development of such malignancies is unclear.
Hepatocellular carcinoma occurring in a Crohn’s disease patient
Mitsuaki Ishida; Shigeyuki Naka; Hisanori Shiomi; Tomoyuki Tsujikawa; Akira Andoh; Tamio Nakahara; Yasuharu Saito; Yoshihide Fujiyama; Mikiko Takikita-Suzuki; Fumiyoshi Kojima; Machiko Hotta; Tohru Tani; Yoshimasa Kurumi; Hidetoshi Okabe
2010-01-01
We report a case of hepatocellular carcinoma (HCC) occurring in a patient with Crohn’s disease (CD) without chronic hepatitis or liver cirrhosis, and review the clinicopathological features of HCC in CD patients. A 37-year-old Japanese man with an 8-year history of CD and a medication history of azathioprine underwent resection of a liver tumor. The histopathology of the liver tumor was pseudoglandular-type HCC. In the nonneoplastic liver, focal hepatocyte glycogenosis (FHG) was observed; however, there was...
Mohsen Taherbaneh; A. H. Rezaie; H. Ghafoorifard; Rahimi, K; M. B. Menhaj
2010-01-01
In applications with low-energy conversion efficiency, maximizing the output power improves the efficiency. The maximum output power of a solar panel depends on the environmental conditions and load profile. In this paper, a method based on simultaneous use of two fuzzy controllers is developed in order to maximize the generated output power of a solar panel in a photovoltaic system: fuzzy-based sun tracking and maximum power point tracking. The sun tracking is performed by changing the solar...
Naturally Occurring Anthraquinones: Chemistry and Therapeutic Potential in Autoimmune Diabetes
Shih-Chang Chien
2015-01-01
Anthraquinones are a class of aromatic compounds with a 9,10-dioxoanthracene core. So far, 79 naturally occurring anthraquinones have been identified, including emodin, physcion, cascarin, catenarin, and rhein. A large body of literature has demonstrated that the naturally occurring anthraquinones possess a broad spectrum of bioactivities, such as cathartic, anticancer, anti-inflammatory, antimicrobial, diuretic, vasorelaxing, and phytoestrogen activities, suggesting their possible clinical application in many diseases. Despite the advances that have been made in understanding the chemistry and biology of the anthraquinones in recent years, research into their mechanisms of action and therapeutic potential in autoimmune disorders is still at an early stage. In this paper, we briefly introduce the etiology of autoimmune diabetes, an autoimmune disorder that affects as many as 10 million people worldwide, and the role of chemotaxis in autoimmune diabetes. We then outline the chemical structure and biological properties of the naturally occurring anthraquinones and their derivatives with an emphasis on recent findings about their immune regulation. We discuss the structure-activity relationship, mode of action, and therapeutic potential of the anthraquinones in autoimmune diabetes, including a new strategy for the use of the anthraquinones in autoimmune diabetes.
Evolution of virulence when transmission occurs before disease.
Osnas, Erik E; Dobson, Andrew P
2010-08-23
Most models of virulence evolution assume that transmission and virulence are constant during an infection. In many viral (HIV and influenza), bacterial (TB) and prion (BSE and CWD) systems, disease-induced mortality occurs long after the host becomes infectious. Therefore, we constructed a model with two infected classes that differ in transmission rate and virulence in order to understand how the evolutionarily stable strategy (ESS) depends on the relative difference in transmission and virulence between classes, on the transition rate between classes and on the recovery rate from the second class. We find that ESS virulence decreases when expressed early in the infection or when transmission occurs late in an infection. When virulence occurred relatively equally in each class and there was disease recovery, ESS virulence increased with increased transition rate. In contrast, ESS virulence first increased and then decreased with transition rate when there was little virulence early in the infection and a rapid recovery rate. This model predicts that ESS virulence is highly dependent on the timing of transmission and pathology after infection; thus, pathogen evolution may either increase or decrease virulence after emergence in a new host.
Percutaneous treatment of complications occurring during hemodialysis graft recanalization
Sofocleous, Constantinos T. E-mail: constant@pol.net; Schur, Israel; Koh, Elsie; Hinrichs, Clay; Cooper, Stanley G.; Welber, Adam; Brountzos, Elias; Kelekis, Dimitris
2003-09-01
Introduction/objective: To describe and evaluate percutaneous treatment methods for complications occurring during recanalization of thrombosed hemodialysis access grafts. Methods and materials: A retrospective review of 579 thrombosed hemodialysis access grafts revealed 48 complications occurring during urokinase thrombolysis (512) or mechanical thrombectomy (67). These include 12 venous or venous anastomotic ruptures not controlled by balloon tamponade, eight arterial emboli, 12 graft extravasations, seven small hematomas, four intragraft pseudointimal 'dissections', two incidents of pulmonary edema, one episode of intestinal angina, one procedural death, and one distant hematoma. Results: Twelve cases of post-angioplasty rupture were treated with uncovered stents, of which 10 resulted in graft salvage allowing successful hemodialysis. All arterial emboli were retrieved by Fogarty or embolectomy balloons. Ten of the 12 graft extravasations were successfully treated by digital compression while the procedure was completed and graft flow was restored. Dissections were treated with prolonged percutaneous transluminal angioplasty (PTA) balloon inflation. Overall technical success was 39/48 (81%). Kaplan-Meier primary and secondary patency rates were 72 and 78% at 30 days, 62 and 73% at 90 days, and 36 and 67% at 180 days, respectively. Secondary patency rates remained over 50% at 1 year. There were no additional complications caused by these maneuvers. Discussion and conclusion: The majority of complications occurring during percutaneous thrombolysis/thrombectomy of thrombosed access grafts can be treated at the same sitting, allowing completion of the recanalization procedure and usage of the same access for hemodialysis.
Hospitalizations and hospital charges for co-occurring substance use and mental disorders.
Ding, Kele; Yang, Jingzhen; Cheng, Gang; Schiltz, Trisha; Summers, Karen M; Skinstad, Anne Helene
2011-06-01
Most published studies have examined co-occurring disorders among mental health patients. Our objective was to compare the length of stay and hospital charges between hospitalized patients with alcohol- or substance-related disorders with and without co-occurring disorders. We analyzed nationally representative hospital discharge data (Nationwide Inpatient Sample, 2003-2007) and examined factors associated with length of stay and hospital charges. Forty-four percent of patients who were hospitalized with alcohol- or substance-related disorders were diagnosed with co-occurring mental disorders, representing 979,421 such disorders nationwide between 2003 and 2007. Females, those of White race, those who paid with insurance, and those who stayed in large, rural, nonteaching, and Midwest region hospitals had a high prevalence of co-occurring disorders. Co-occurring disorders were associated with longer hospital stays, but there were mixed results with hospital charges per discharge. An increase in co-occurring disorders among hospitalized patients with substance-related disorder may be due to the improvement in diagnosis and clinical attention.
GUAN Hsin; WANG Bo; LU Pingping; XU Liang
2014-01-01
The identification of the maximum road friction coefficient and optimal slip ratio is crucial to vehicle dynamics and control. However, it is not easy to identify the maximum road friction coefficient with high robustness and good adaptability to various vehicle operating conditions, and existing investigations on robust identification of the maximum road friction coefficient are unsatisfactory. In this paper, an identification approach based on road type recognition is proposed for the robust identification of the maximum road friction coefficient and optimal slip ratio. The instantaneous road friction coefficient is estimated through the recursive least squares with a forgetting factor method based on the single-wheel model, and the estimated road friction coefficient and slip ratio are grouped into a set of samples in a small time interval before the current time, which are updated as time progresses. The current road type is recognized by comparing the samples of the estimated road friction coefficient with the standard road friction coefficient of each typical road, and the minimum statistical error is used as the recognition principle to improve identification robustness. Once the road type is recognized, the maximum road friction coefficient and optimal slip ratio are determined. Numerical simulation tests are conducted on two typical road friction conditions (single-friction and joint-friction) using CarSim software. The test results show that there is little identification error between the identified maximum road friction coefficient and the pre-set value in CarSim. The proposed identification method has good robustness to external disturbances and good adaptability to various vehicle operating conditions and road variations, and the identification results can be used for the adjustment of vehicle active safety control strategies.
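The core estimator named in the abstract, recursive least squares with a forgetting factor, can be sketched in scalar form. The regressor `phi` and measurement model below are simplified illustrative stand-ins (a single unknown friction coefficient in a linear measurement), not the paper's exact single-wheel formulation.

```python
class ForgettingRLS:
    """Scalar recursive least squares with a forgetting factor.

    Tracks an unknown coefficient mu from noisy measurements
    y = phi * mu + noise, discounting old data by lam per step
    (illustrative sketch, not the paper's implementation)."""

    def __init__(self, lam=0.98, mu0=0.0, p0=1000.0):
        self.lam = lam   # forgetting factor, 0 < lam <= 1
        self.mu = mu0    # current coefficient estimate
        self.p = p0      # scalar covariance (large = uninformative prior)

    def update(self, phi, y):
        # gain, estimate correction, covariance update (scalar RLS)
        k = self.p * phi / (self.lam + phi * self.p * phi)
        self.mu += k * (y - phi * self.mu)
        self.p = (self.p - k * phi * self.p) / self.lam
        return self.mu
```

Fed a constant noiseless measurement, the estimate converges geometrically to the true coefficient; the forgetting factor keeps the gain bounded away from zero so the filter can also follow a slowly drifting road surface.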
Predicting Maximum Sunspot Number in Solar Cycle 24
Nipa J Bhatt; Rajmal Jain; Malini Aggarwal
2009-03-01
A few prediction methods have been developed based on the precursor technique which is found to be successful for forecasting the solar activity. Considering the geomagnetic activity aa indices during the descending phase of the preceding solar cycle as the precursor, we predict the maximum amplitude of annual mean sunspot number in cycle 24 to be 111 ± 21. This suggests that the maximum amplitude of the upcoming cycle 24 will be less than cycles 21–22. Further, we have estimated the annual mean geomagnetic activity aa index for the solar maximum year in cycle 24 to be 20.6 ± 4.7 and the average of the annual mean sunspot number during the descending phase of cycle 24 is estimated to be 48 ± 16.8.
Construction and enumeration of Boolean functions with maximum algebraic immunity
ZHANG WenYing; WU ChuanKun; LIU XiangZhong
2009-01-01
Algebraic immunity is a new cryptographic criterion proposed against algebraic attacks. In order to resist algebraic attacks, Boolean functions used in many stream ciphers should possess high algebraic immunity. This paper presents two main results on finding balanced Boolean functions with maximum algebraic immunity. By swapping the values of two bits, and then generalizing the result to swap some pairs of bits of the symmetric Boolean function constructed by Dalai, a new class of Boolean functions with maximum algebraic immunity is constructed. An enumeration of such functions is also given. For a given function p(x) with deg(p(x)) < [n/2], we give a method to construct functions of the form p(x)+q(x) which achieve maximum algebraic immunity, where every term with nonzero coefficient in the ANF of q(x) has degree no less than [n/2].
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18 data sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
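Burg's recursion, the algorithm named in the abstract, fits an autoregressive model by minimizing the sum of forward and backward prediction error powers at each order. A compact pure-Python sketch (illustrative only; the paper's interferometer data and implementation are not reproduced here):

```python
def arburg(x, order):
    """Burg maximum-entropy AR estimate.

    Returns (a, e): AR coefficients a[0..order-1] (for the model
    x[t] + a[0]*x[t-1] + ... = e[t]) and the final error power."""
    n = len(x)
    e = sum(v * v for v in x) / n
    a = []
    f, b = list(x), list(x)          # forward / backward prediction errors
    for m in range(order):
        # reflection coefficient minimizing forward + backward error power
        num = sum(f[i] * b[i - 1] for i in range(m + 1, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m + 1, n))
        k = -2.0 * num / den
        # Levinson-Durbin update of the AR coefficients
        a = [a[i] + k * a[m - 1 - i] for i in range(m)] + [k]
        # update prediction errors in place (old forward errors saved)
        fo = list(f)
        for i in range(n - 1, m, -1):
            f[i] = fo[i] + k * b[i - 1]
            b[i] = b[i - 1] + k * fo[i]
        e *= (1.0 - k * k)
    return a, e
```

On a synthetic AR(1) series x[t] = 0.5 x[t-1] + w[t], an order-1 fit recovers a[0] ≈ -0.5; the MEM spectrum is then 1/|A(e^{jω})|² scaled by e.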
Mass mortality of the vermetid gastropod Ceraesignum maximum
Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.
2016-09-01
Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data is processed with the optimal polarimetric matched filter.
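The eigenvalue formulation described in the abstract amounts to maximizing a ratio of quadratic forms w·Aw / w·Bw, which leads to the generalized eigenproblem A w = λ B w. A toy real-valued 2x2 sketch (the paper works with complex polarimetric covariance matrices; the matrices below are illustrative stand-ins):

```python
def max_contrast_filter(A, B):
    """Return (ratio, w): the maximum of w.Aw / w.Bw over 2-vectors w,
    and a maximizing w, via the generalized eigenproblem A w = ratio*B w
    solved through M = B^{-1} A. A, B: 2x2 symmetric positive-definite
    'power' matrices for the two scattering classes (toy stand-ins)."""
    det_b = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    Binv = [[ B[1][1] / det_b, -B[0][1] / det_b],
            [-B[1][0] / det_b,  B[0][0] / det_b]]
    M = [[sum(Binv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    tr = M[0][0] + M[1][1]
    det_m = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    lam = tr / 2.0 + (tr * tr / 4.0 - det_m) ** 0.5  # larger eigenvalue
    # eigenvector of M for eigenvalue lam: (M - lam*I) w = 0
    if abs(M[0][1]) > 1e-12:
        w = [M[0][1], lam - M[0][0]]
    elif abs(M[1][0]) > 1e-12:
        w = [lam - M[1][1], M[1][0]]
    else:  # M already diagonal
        w = [1.0, 0.0] if M[0][0] >= M[1][1] else [0.0, 1.0]
    return lam, w
```

For A = [[3,1],[1,3]] and B = I, the maximum contrast ratio is 4, achieved by w ∝ (1, 1), matching the larger eigenvalue of A.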
Influence of maximum decking charge on intensity of blasting vibration
Anonymous
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. Firstly, the characteristics of the wavelet transform and wavelet packet analysis were described. Secondly, the blasting vibration signals were analyzed by wavelet packet in MATLAB, and the changes of the energy distribution curves in different frequency bands were obtained. Finally, the law of the energy distribution of blasting vibration signals changing with the maximum decking charge was analyzed. The results show that with the increase of decking charge, the ratio of high-frequency energy to total energy decreases and the dominant frequency bands of blasting vibration signals tend towards low frequency, while blasting vibration does not depend on the maximum decking charge.
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been growing interest in using energy harvesting techniques for powering wireless sensor networks. The reason for utilizing this technology is the sensors' limited operation time, which results from the finite capacity of batteries, and the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multisource energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
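The simplest maximum power point tracking scheme of the kind surveyed in this abstract is perturb-and-observe: nudge the operating voltage, and reverse direction whenever output power drops. A generic textbook sketch (not any specific algorithm from the review; `pv_power` is an assumed measurement callback):

```python
def perturb_and_observe(pv_power, v0, dv=0.5, steps=200):
    """Classic perturb-and-observe MPPT loop (illustrative sketch).

    pv_power(v) -> measured panel output power at operating voltage v.
    The voltage is perturbed by dv each step; if power falls, the
    perturbation direction is reversed, so v oscillates around the MPP."""
    v, step = v0, dv
    p_prev = pv_power(v)
    for _ in range(steps):
        v += step
        p = pv_power(v)
        if p < p_prev:
            step = -step   # power dropped: walk the other way
        p_prev = p
    return v
```

With a toy concave power curve peaking at 17 V, the tracker settles into an oscillation within one perturbation step of the maximum, illustrating the residual MPP oscillation that more refined methods try to reduce.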
Proscriptive Bayesian Programming and Maximum Entropy: a Preliminary Study
Koike, Carla Cavalcante
2008-11-01
Some problems found in robotics systems, such as obstacle avoidance, can be better described using proscriptive commands, where only prohibited actions are indicated, in contrast to prescriptive situations, which demand that a specific command be specified. An interesting question arises regarding the possibility of learning automatically whether proscriptive commands are suitable and which parametric function could best be applied. Lately, a great variety of problems in the robotics domain have been the object of research using probabilistic methods, including the use of maximum entropy in automatic learning for robot control systems. This work presents a preliminary study on automatic learning of proscriptive robot control using maximum entropy and Bayesian Programming. It is verified whether maximum entropy and related methods can favour proscriptive commands in an obstacle avoidance task executed by a mobile robot.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems coming from Differential Geometry, for example the minimal submanifolds problem and the harmonic maps problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 underlines some science domains where multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach to multitime variational calculus. Section 5 (Section 6) proves that minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Kenneth W. K. Lui
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Remarks on the strong maximum principle for nonlocal operators
Jerome Coville
2008-05-01
In this note, we study the existence of a strong maximum principle for the nonlocal operator $$ \mathcal{M}[u](x) := \int_{G} J(g)\, u(x * g^{-1})\, d\mu(g) - u(x), $$ where $G$ is a topological group acting continuously on a Hausdorff space $X$ and $u \in C(X)$. First we investigate the general situation and derive a pre-maximum principle. Then we restrict our analysis to the case of homogeneous spaces (i.e., $X = G/H$). For such Hausdorff spaces, depending on the topology, we give a condition on $J$ such that a strong maximum principle holds for $\mathcal{M}$. We also revisit the classical case of the convolution operator (i.e. $G = (\mathbb{R}^n, +)$, $X = \mathbb{R}^n$, $d\mu = dy$).
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear program (MILP) is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on the routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
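The practical consequence of changing a default transfer function can be illustrated with a toy calculation. The linear form v = m·f + b is the standard cup-anemometer model, but the slope/offset pairs below are hypothetical values, not the actual Maximum No. 40 coefficients:

```python
# Hypothetical linear cup-anemometer transfer functions v = m*f + b.
# The slope/offset pairs are illustrative, NOT the actual Maximum No. 40
# coefficients; they show how two defaults can diverge by several percent.
def wind_speed(freq_hz, slope, offset):
    return slope * freq_hz + offset      # output wind speed, m/s

f = 40.0                                 # cup rotation frequency, Hz
v_a = wind_speed(f, slope=0.765, offset=0.35)   # transfer function A (assumed)
v_b = wind_speed(f, slope=0.800, offset=0.30)   # transfer function B (assumed)
pct_diff = 100.0 * (v_b - v_a) / v_a
print(round(v_a, 2), round(v_b, 2), round(pct_diff, 1))
```

With these assumed coefficients the two functions disagree by roughly 4%, the same order as the 4.6% calibration-method difference reported above.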
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters, where the maximum load supported by a column exceeds the Euler static force, is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotoni...
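The Euler static force used as the reference load above is the classical pinned-pinned buckling load $P_E = \pi^2 E I / L^2$. A quick numerical check for an illustrative steel rod (material and geometry are assumed values):

```python
import math

# Euler static buckling load for a pinned-pinned column: P_E = pi^2 * E * I / L^2.
# Material and geometry are assumed values for a steel rod of circular section.
E = 200e9                                # Young's modulus, Pa (steel, assumed)
d = 0.02                                 # rod diameter, m (assumed)
L = 1.0                                  # column length, m (assumed)
I = math.pi * d**4 / 64.0                # second moment of area, circular section
P_euler = math.pi**2 * E * I / L**2      # Euler static force, N
print(round(P_euler), "N")
```

The dynamic maximum load studied in the paper is compared against this static threshold.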
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
Estimating the maximum potential revenue for grid connected electricity storage :
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
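The arbitrage part of the calculation can be sketched with a toy state-of-charge model. The prices, device parameters, and exhaustive search below are illustrative stand-ins; the paper formulates the realistic problem as a linear program:

```python
from itertools import product

# Toy state-of-charge arbitrage sketch: charge at low prices, discharge at high,
# under capacity/power limits. Prices and device parameters are made up; an
# exhaustive search replaces the paper's linear programming formulation.
prices = [20.0, 15.0, 60.0, 55.0]            # $/MWh over four hourly periods
capacity, power, eff = 2.0, 1.0, 0.9         # MWh, MW, round-trip efficiency

def revenue(actions):                        # +1 charge, 0 idle, -1 discharge
    soc, cash = 0.0, 0.0
    for a, price in zip(actions, prices):
        if a == 1 and soc + power <= capacity:
            soc += power
            cash -= power * price            # pay to charge
        elif a == -1 and soc - power >= 0.0:
            soc -= power
            cash += power * eff * price      # sell, with efficiency losses
    return cash

best = max(product([-1, 0, 1], repeat=len(prices)), key=revenue)
print(best, round(revenue(best), 1))         # charges the cheap hours, sells the dear ones
```

Exhaustive search is only feasible for a handful of periods; the LP approach scales to the year-long CAISO horizons used in the study.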
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by theory), TOM can also be observed at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HO_{x} concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HO_{x} enhancement from the increased ionization.
Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules
Gao, Junling; Chen, Min
2013-01-01
Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations on the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...
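For a module with a linear (ohmic) current-voltage characteristic, the open/short-circuit estimate takes the familiar form $P_{\max} \approx V_{oc} I_{sc} / 4$. The readings below are invented purely to mimic the roughly 10% mode-dependent spread the abstract reports:

```python
# For a linear (ohmic) module, the maximum power estimate is
# P_max = Voc * Isc / 4. All readings are invented for illustration; the
# second switch mode is given a lower short-circuit reading to mimic the
# roughly 10% mode-dependent difference reported in the abstract.
def max_power(voc, isc):
    return voc * isc / 4.0

p_open_to_short = max_power(voc=4.2, isc=2.0)   # open -> short mode (made up)
p_short_to_open = max_power(voc=4.2, isc=1.8)   # short -> open mode (made up)
spread = (p_open_to_short - p_short_to_open) / p_open_to_short
print(round(p_open_to_short, 2), round(100 * spread, 1), "%")
```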
Microcanonical origin of the maximum entropy principle for open systems.
Lee, Julian; Pressé, Steve
2012-10-01
There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
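The maximum entropy distribution subject to a mean-energy constraint is the Boltzmann distribution $p_i = e^{-E_i/T}/Z$ referred to above. A minimal numerical illustration with arbitrary, made-up energy levels:

```python
import math

# The maximum entropy distribution under a mean-energy constraint is the
# Boltzmann distribution p_i = exp(-E_i / T) / Z. Energy levels and the
# temperature below are arbitrary illustrative values.
energies = [0.0, 1.0, 2.0]
T = 1.0
weights = [math.exp(-e / T) for e in energies]
Z = sum(weights)                          # partition function
probs = [w / Z for w in weights]
print([round(p, 3) for p in probs])       # excited states keep nonzero weight
```

At finite T the excited states retain nonzero probability, which is the point of the partial maximization argument: the bath degrees of freedom are traced out, not frozen.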
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects. Therefore, it is expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.
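The transfer-matrix construction can be sketched in miniature: from a positive matrix, power iteration yields the leading (Perron) eigenvector, from which an equivalent row-stochastic Markov chain follows. The 2-state matrix here is made up for illustration:

```python
# Toy transfer-matrix sketch: power iteration finds the leading (Perron)
# eigenvector of a positive matrix, from which a row-stochastic Markov chain
# is built (Doob-transform style). The 2x2 matrix entries are made up.
M = [[2.0, 1.0],
     [1.0, 1.0]]

v = [1.0, 1.0]
for _ in range(100):                      # power iteration
    v = [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]
    s = sum(v)
    v = [x / s for x in v]

# Transition probabilities: P(i -> j) proportional to M[i][j] * v[j],
# normalised so that each row sums to one.
P = []
for i in range(2):
    row = [M[i][j] * v[j] for j in range(2)]
    s = sum(row)
    P.append([x / s for x in row])

print([[round(x, 3) for x in row] for row in P])
```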
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The example illustrates practical aspects of the associated computational, inferential, and data analytic techniques.
Maximum length scale in density based topology optimization
Lazarov, Boyan Stefanov; Wang, Fengwen
2017-01-01
The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea...
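The classical density filtering that the method builds on is a low-pass (weighted-average) operation on the design field. A minimal 1-D sketch with a linear hat weighting; the paper's maximum-length-scale filters are not reproduced here:

```python
# Minimal 1-D density-filter sketch: a weighted (hat-function) average of the
# design densities within a filter radius. Purely illustrative; the paper's
# maximum-length-scale techniques add further steps on top of this.
def density_filter(x, radius):
    n = len(x)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            w = max(0.0, radius - abs(i - j))   # linear (hat) weighting
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

raw = [0.0, 0.0, 1.0, 0.0, 0.0]               # single-element "spike" design
filtered = density_filter(raw, radius=2.0)
print(filtered)                                # → [0.0, 0.25, 0.5, 0.25, 0.0]
```

Filtering spreads the spike over the radius, which is why plain density filtering imposes a minimum, not a maximum, length scale.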
On the Effect of Mortgages of Maximum Amount
YangZongping
2005-01-01
Since the enactment of the PRC Guarantee Law, mortgages of maximum amount have won wide application in a variety of business occupations, particularly in banking. Compared with the rich content of the 21-clause statute on mortgages of maximum amount in Japan's Civil Law, the Chinese law has only four principled clauses. Its lack of operability, plus its legislative gaps and defects, has a severe impact on the positive effectiveness of the law. The core issue is the question of effectiveness. Because the principles stipulated in the Law run counter to the diversity of its actual practices,
A Maximum Entropy Method for a Robust Portfolio Problem
Yingying Xu
2014-06-01
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.