Minimum Length - Maximum Velocity
Panes, Boris
2011-01-01
We study a framework in which the hypothesis of a minimum length in space-time is complemented with the notion of reference-frame invariance. It turns out to be natural to interpret the action of the obtained reference-frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum-length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements that has minimum average depth (the number of such trees is approximately 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to optimize decision trees sequentially with respect to depth and average depth, and to count the number of decision trees with minimum average depth.
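As a sanity check on the arithmetic (not part of the paper): 8! = 40320, so 620160/8! ≈ 15.3810 comparisons on average, just above the information-theoretic lower bound log2(8!) ≈ 15.2992. A minimal Python check:

```python
import math

# Average-depth check for sorting n = 8 distinct elements.
n_fact = math.factorial(8)             # 8! = 40320 leaves in the decision tree
avg_depth = 620160 / n_fact            # minimum average depth from the paper
info_bound = math.log2(n_fact)         # entropy lower bound log2(8!)

print(f"620160/8! = {avg_depth:.6f}")  # ~15.380952
print(f"log2(8!)  = {info_bound:.6f}") # ~15.299208
assert avg_depth >= info_bound         # consistent with the information bound
```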
[No author listed]
2011-01-01
[Objective] The research aimed to analyze temporal and spatial variation characteristics of temperature in Shangqiu City during 1961-2010. [Method] Based on temperature data from eight meteorological stations in Shangqiu during 1961-2010, and using the trend analysis method, the temporal and spatial evolution characteristics of the annual average temperature, annual average maximum and minimum temperatures, annual extreme maximum and minimum temperatures, and daily range of annual average temperature in Shangqiu City were analy...
A new solar signal: Average maximum sunspot magnetic fields independent of activity cycle
Livingston, William
2016-01-01
Over the past five years, 2010-2015, we have observed, in the near infrared (IR), the maximum magnetic field strengths for 4145 sunspot umbrae. Herein we distinguish field strength from field flux (most solar magnetographs measure flux). Maximum field strength in umbrae is co-spatial with the position of umbral minimum brightness (Norton and Gilman, 2004). We measure field strength by the Zeeman splitting of the Fe 15648.5 Å spectral line. We show that in the IR no cycle dependence of the average maximum field strength (2050 ± 20 G) has been found. A similar analysis of 17,450 spots observed by the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory reveals the same cycle independence to within ±0.18 G, a variation of 0.01%. This is found not to change over the ongoing 2010-2015 minimum-to-maximum cycle. We conclude that the average maximum umbral fields on the Sun are constant with time.
An application of Hamiltonian neurodynamics using Pontryagin's Maximum (Minimum) Principle.
Koshizen, T; Fulcher, J
1995-12-01
Classical optimal control methods, notably Pontryagin's Maximum (Minimum) Principle (PMP), can be employed, together with Hamiltonians, to determine optimal system weights in artificial neural dynamical systems. A new learning rule based on weight equations derived using PMP is shown to be suitable for both discrete- and continuous-time systems and, moreover, can also be applied to feedback networks. Preliminary testing shows that this PMP learning rule compares favorably with standard backpropagation (SBP) on the XOR problem.
Improved Minimum Cuts and Maximum Flows in Undirected Planar Graphs
Italiano, Giuseppe F
2010-01-01
In this paper we study minimum cut and maximum flow problems on planar graphs, both in static and in dynamic settings. First, we present an algorithm that, given an undirected planar graph, computes the minimum cut between any two given vertices in O(n log log n) time. Second, we show how to achieve the same O(n log log n) bound for the problem of computing maximum flows in undirected planar graphs. To the best of our knowledge, these are the first algorithms for those two problems that break the O(n log n) barrier, which has been standing for more than 25 years. Third, we present a fully dynamic algorithm that is able to maintain information about minimum cuts and maximum flows in a plane graph (i.e., a planar graph with a fixed embedding): our algorithm is able to insert edges, delete edges, and answer min-cut and max-flow queries between any pair of vertices in O(n^(2/3) log^3 n) time per operation. This result is based on a new dynamic shortest path algorithm for planar graphs which may be of independent interest.
Maximum, minimum, and optimal mutation rates in dynamic environments
Ancliff, Mark; Park, Jeong-Man
2009-12-01
We analyze the dynamics of the parallel mutation-selection quasispecies model with a changing environment. For an environment with the sharp-peak fitness function in which the most fit sequence changes by k spin flips every period T , we find analytical expressions for the minimum and maximum mutation rates for which a quasispecies can survive, valid in the limit of large sequence size. We find an asymptotic solution in which the quasispecies population changes periodically according to the periodic environmental change. In this state we compute the mutation rate that gives the optimal mean fitness over a period. We find that the optimal mutation rate per genome, k/T , is independent of genome size, a relationship which is observed across broad groups of real organisms.
Quantum state discrimination using the minimum average number of copies
Slussarenko, Sergei; Li, Jun-Gang; Campbell, Nicholas; Wiseman, Howard M; Pryde, Geoff J
2016-01-01
In the task of discriminating between nonorthogonal quantum states from multiple copies, the key parameters are the error probability and the resources (number of copies) used. Previous studies have considered the task of minimizing the average error probability for fixed resources. Here we consider minimizing the average resources for a fixed admissible error probability. We derive a detection scheme optimized for the latter task, and experimentally test it, along with schemes previously considered for the former task. We show that, for our new task, our new scheme outperforms all previously considered schemes.
Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.
1977-02-01
...maximizing the same have been proposed (i) in the time domain by Box and Jenkins [4], Astrom [3], Wilson [23], and Phadke [16], and (ii) in the frequency domain by... moving average residuals and other covariance matrices with linear structure", Annals of Statistics, 3; Astrom, K. J. (1970), Introduction to...
CO2 maximum in the oxygen minimum zone (OMZ)
Paulmier, A.; Ruiz-Pino, D.; Garçon, V.
2011-02-01
Oxygen minimum zones (OMZs), known as suboxic layers mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to contribute significantly to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the oceanic sources-and-sinks budget of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the main, most intense OMZs in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000-2002) and a monthly monitoring (2000-2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg-1, up to 2350 μmol kg-1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest subsurface carbon reserves of the ocean. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates. The "carbon deficit" in the CMZ core would be mainly compensated locally at the oxycline by a "carbon excess" induced by specific remineralization. Indeed, a possible co-existence of bacterial heterotrophic and autotrophic processes usually occurring at different depths could...
MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
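To make the PARMA class concrete, the sketch below simulates a Gaussian PARMA(1,1) process x_t = φ_{t mod s} x_{t-1} + ε_t + θ_{t mod s} ε_{t-1}, whose AR and MA coefficients cycle with the season. The period and coefficient values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate_parma11(phi, theta, n, sigma=1.0, seed=0):
    """Simulate a PARMA(1,1) process whose AR/MA coefficients cycle
    with period s = len(phi) (periodic seasonal parameters)."""
    rng = np.random.default_rng(seed)
    s = len(phi)
    eps = rng.normal(0.0, sigma, n)
    x = np.zeros(n)
    for t in range(1, n):
        k = t % s  # season index
        x[t] = phi[k] * x[t - 1] + eps[t] + theta[k] * eps[t - 1]
    return x

# Illustrative 12-"month" periodic coefficients (assumed values).
phi = 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(12) / 12)
theta = 0.2 * np.cos(2 * np.pi * np.arange(12) / 12)
x = simulate_parma11(phi, theta, n=600)
print(x[:5])
```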
MARSpline model for lead seven-day maximum and minimum air temperature prediction in Chennai, India
K Ramesh; R Anitha
2014-06-01
In this study, a Multivariate Adaptive Regression Spline (MARS) based system for predicting minimum and maximum surface air temperature up to seven days ahead is modelled for the Chennai station, India. To emphasize the effectiveness of the proposed system, a comparison is made with models created using the statistical learning technique Support Vector Machine Regression (SVMr). The analysis highlights that the prediction accuracy of the MARS models for minimum temperature is promising for short-term forecasts (lead days 1 to 3), with mean absolute error (MAE) less than 1°C, while the prediction efficiency and skill degrade in the medium-term forecast (lead days 4 to 7), with MAE slightly above 1°C. The MAE for maximum temperature is slightly higher than for minimum temperature, varying from 0.87°C for lead day one to 1.27°C for lead day seven with the MARS approach. The statistical error analysis emphasizes that the MARS models perform well, with an average 0.2°C reduction in MAE over the SVMr models across all seven lead days, and provide significant guidance for the prediction of temperature events. The study also suggests that the correlation between the atmospheric parameters used as predictors and the temperature event decreases as the lead time increases with both approaches.
Data-aided efficient synchronization for UWB signals based on minimum average error probability
SUN Qiang; LÜ Tie-jun
2008-01-01
One of the biggest challenges in ultra-wideband (UWB) radio is accurate timing acquisition at the receiver. In this article, we develop a novel data-aided synchronization algorithm for pulse amplitude modulation (PAM) UWB systems. Pilot and information symbols are transmitted simultaneously by an orthogonal code division multiplexing (OCDM) scheme. In the receiver, an algorithm based on the minimum average error probability (MAEP) of the coherent detector is applied to estimate the timing offset. The multipath interference (MI) problem for timing offset estimation is considered. The mean-square-error (MSE) and bit-error-rate (BER) performances of our proposed scheme are simulated. The results show that our algorithm outperforms the algorithm based on the maximum correlator output (MCO) in multipath channels.
THE MAXIMUM AND MINIMUM DEGREES OF RANDOM BIPARTITE MULTIGRAPHS
Chen Ailian; Zhang Fuji; Li Hao
2011-01-01
In this paper the authors generalize the classic random bipartite graph model and define a model of random bipartite multigraphs as follows: let m = m(n) be a positive integer-valued function of n, and let G(n, m; {p_k}) be the probability space consisting of all labeled bipartite multigraphs with two vertex sets A = {a_1, a_2, ..., a_n} and B = {b_1, b_2, ..., b_m}, in which the numbers t_{a_i b_j} of edges between any two vertices a_i ∈ A and b_j ∈ B are independent, identically distributed random variables with distribution P{t_{a_i b_j} = k} = p_k, k = 0, 1, 2, ..., where p_k ≥ 0 and Σ_k p_k = 1. They show that X_{c,d,A}, the number of vertices in A with degree between c and d of G_{n,m} ∈ G(n, m; {p_k}), has an asymptotically Poisson distribution, and they answer the following two questions about the space G(n, m; {p_k}), with {p_k} having a geometric, binomial, or Poisson distribution, respectively. Under which condition on {p_k} is there a function D(n) such that almost every random multigraph G_{n,m} ∈ G(n, m; {p_k}) has maximum degree D(n) in A? Under which condition on {p_k} does almost every multigraph G_{n,m} ∈ G(n, m; {p_k}) have a unique vertex of maximum degree in A?
Minimum disturbance rewards with maximum possible classical correlations
Pande, Varad R., E-mail: varad_pande@yahoo.in [Department of Physics, Indian Institute of Science Education and Research Pune, 411008 (India); Shaji, Anil [School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, 695016 (India)
2017-07-12
Weak measurements done on a subsystem of a bipartite system having both classical and nonclassical correlations between its components can potentially reveal information about the other subsystem with minimal disturbance to the overall state. We use weak quantum discord and the fidelity between the initial bipartite state and the state after measurement to construct a cost function that accounts for both the amount of information revealed about the other system and the disturbance to the overall state. We investigate the behaviour of the cost function for families of two-qubit states and show that there is an optimal choice that can be made for the strength of the weak measurement. - Highlights: • Weak measurements done on one part of a bipartite system with controlled strength. • Weak quantum discord & fidelity used to quantify all correlations and disturbance. • Cost function to probe the tradeoff between extracted correlations and disturbance. • Optimal measurement strength for maximum extraction of classical correlations.
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines in the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, the Novikov engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output, and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent at the condition of fixed heat input.
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
Jacoby, Gordon
2008-01-01
Ecological systems in the headwaters of the Yellow River, characterized by harsh natural environmental conditions, are very vulnerable to climatic change. In recent decades, this area has attracted great public attention because of its steadily deteriorating environmental conditions. Based on tree-ring samples from the Xiqing Mountain and A'nyêmagên Mountains at the headwaters of the Yellow River in the northeastern Tibetan Plateau, we reconstructed the minimum temperatures in the winter half-year over the last 425 years and the maximum temperatures in the summer half-year over the past 700 years in this region. The minimum temperature in the winter half-year was relatively stable over the period 1578-1940, followed by an abrupt warming trend since 1941. However, there is no significant warming trend for the maximum temperature in the summer half-year over the 20th century. Asymmetric variation patterns between the minimum and maximum temperatures were thus observed over the past 425 years: the two series show similar variation patterns, but the minimum temperatures vary about 25 years earlier than the maximum temperatures. If this relationship between the minimum and maximum temperatures over the past 425 years continues for the next 30 years, the maximum temperature in this region will increase significantly.
Minimum redundancy maximum relevance feature selection approach for temporal gene expression data.
Radovic, Milos; Ghalwash, Mohamed; Filipovic, Nenad; Obradovic, Zoran
2017-01-03
Feature selection, aiming to identify a subset of features among a possibly large set of features that are relevant for predicting a response, is an important preprocessing step in machine learning. In gene expression studies this is not a trivial task for several reasons, including the potentially temporal character of the data. However, most feature selection approaches developed for microarray data cannot handle multivariate temporal data without prior data flattening, which results in loss of temporal information. We propose a temporal minimum redundancy - maximum relevance (TMRMR) feature selection approach, which is able to handle multivariate temporal data without prior data flattening. In the proposed approach we compute the relevance of a gene by averaging F-statistic values calculated across individual time steps, and we compute redundancy between genes by using a dynamic time warping approach. The proposed method is evaluated on three temporal gene expression datasets from human viral challenge studies. The obtained results show that the proposed method outperforms alternatives widely used in gene expression studies. In particular, the proposed method achieved improvement in accuracy in 34 out of 54 experiments, while the other methods outperformed it in no more than 4 experiments. We developed a filter-based feature selection method for temporal gene expression data based on maximum relevance and minimum redundancy criteria. The proposed method incorporates temporal information by combining relevance, which is calculated as an average F-statistic value across different time steps, with redundancy, which is calculated by employing a dynamic time warping approach. As evident in our experiments, incorporating the temporal information into the feature selection process leads to the selection of more discriminative features.
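A minimal sketch of the selection rule as described (relevance = F-statistic averaged over time steps, redundancy = a DTW-based similarity, greedy mRMR-style selection). The data shapes, the use of scipy's f_oneway, the inverse-distance redundancy, and the difference form of the score are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from scipy.stats import f_oneway

def dtw_distance(a, b):
    """Plain O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def tmrmr_select(X, y, k):
    """X: (samples, genes, time_steps); y: class labels; returns k gene indices."""
    n, g, T = X.shape
    classes = [X[y == c] for c in np.unique(y)]
    # Relevance: per-time-step F statistic, averaged over time.
    rel = np.array([
        np.mean([f_oneway(*[c[:, j, t] for c in classes]).statistic
                 for t in range(T)])
        for j in range(g)
    ])
    prof = X.mean(axis=0)  # mean temporal profile per gene, for DTW redundancy
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(g):
            if j in selected:
                continue
            red = np.mean([1.0 / (1.0 + dtw_distance(prof[j], prof[s]))
                           for s in selected])   # assumed similarity form
            score = rel[j] - red                 # assumed mRMR difference score
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Tiny synthetic demo: 30 samples, 20 genes, 8 time steps, 2 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20, 8))
y = np.repeat([0, 1], 15)
X[y == 1, :3, :] += 1.0          # make the first 3 genes informative
print(tmrmr_select(X, y, k=3))
```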
2010-10-01
Maximum and minimum allowable operating pressure; low-pressure distribution systems (49 CFR, Transportation, 2010-10-01). (a) No person may operate a low-pressure distribution system at a pressure high enough to... No person may operate a low-pressure distribution system at a pressure lower than the minimum...
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that result from unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
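A minimal stationary sketch of the two estimators named above: a GEV fit to (negated) daily minima to obtain a lower quantile threshold, and a seasonal ARIMA forecast band for the upper threshold on daily averages. The synthetic data, the stationarity simplification, and the specific (p,d,q)(P,D,Q,s) order are assumptions; the report's models are non-stationary and tuned to real traffic:

```python
import numpy as np
from scipy.stats import genextreme
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
daily_min = 50 + rng.gamma(2.0, 5.0, 365)   # synthetic daily minimum response times (ms)
daily_avg = 120 + 10 * np.sin(2 * np.pi * np.arange(365) / 7) + rng.normal(0, 3, 365)

# Lower threshold: fit a GEV to block minima. min(X) = -max(-X), so fit maxima of -X.
c, loc, scale = genextreme.fit(-daily_min)
lower_threshold = -genextreme.ppf(0.99, c, loc=loc, scale=scale)  # 1st-percentile minimum
print(f"lower threshold ~ {lower_threshold:.1f} ms")

# Upper threshold: seasonal ARIMA on daily averages; flag values above the forecast band.
model = SARIMAX(daily_avg, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7))
fit = model.fit(disp=False)
fc = fit.get_forecast(steps=7)
upper_threshold = fc.conf_int(alpha=0.05)[:, 1]  # upper 95% band, one value per day
print("upper thresholds:", np.round(upper_threshold, 1))
```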
Changes in atmospheric circulation between solar maximum and minimum conditions in winter and summer
Lee, Jae Nyung
2008-10-01
Statistically significant climate responses to the solar variability are found in Northern Annular Mode (NAM) and in the tropical circulation. This study is based on the statistical analysis of numerical simulations with ModelE version of the chemistry coupled Goddard Institute for Space Studies (GISS) general circulation model (GCM) and National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. The low frequency large scale variability of the winter and summer circulation is described by the NAM, the leading Empirical Orthogonal Function (EOF) of geopotential heights. The newly defined seasonal annular modes and its dynamical significance in the stratosphere and troposphere in the GISS ModelE is shown and compared with those in the NCEP/NCAR reanalysis. In the stratosphere, the summer NAM obtained from NCEP/NCAR reanalysis as well as from the ModelE simulations has the same sign throughout the northern hemisphere, but shows greater variability at low latitudes. The patterns in both analyses are consistent with the interpretation that low NAM conditions represent an enhancement of the seasonal difference between the summer and the annual averages of geopotential height, temperature and velocity distributions, while the reverse holds for high NAM conditions. Composite analysis of high and low NAM cases in both the model and observation suggests that the summer stratosphere is more "summer-like" when the solar activity is near a maximum. This means that the zonal easterly wind flow is stronger and the temperature is higher than normal. Thus increased irradiance favors a low summer NAM. A quantitative comparison of the anti-correlation between the NAM and the solar forcing is presented in the model and in the observation, both of which show lower/higher NAM index in solar maximum/minimum conditions. The summer NAM in the troposphere obtained from NCEP/NCAR reanalysis has a dipolar zonal structure with maximum
Climate change uncertainty for daily minimum and maximum temperatures: a model inter-comparison
Lobell, D; Bonfils, C; Duffy, P
2006-11-09
Several impacts of climate change may depend more on changes in mean daily minimum (T_min) or maximum (T_max) temperatures than daily averages. To evaluate uncertainties in these variables, we compared projections of T_min and T_max changes by 2046-2065 for 12 climate models under an A2 emission scenario. Average modeled changes in T_max were slightly lower in most locations than T_min, consistent with historical trends exhibiting a reduction in diurnal temperature ranges. However, while average changes in T_min and T_max were similar, the inter-model variability of T_min and T_max projections exhibited substantial differences. For example, inter-model standard deviations of June-August T_max changes were more than 50% greater than for T_min throughout much of North America, Europe, and Asia. Model differences in cloud changes, which exert relatively greater influence on T_max during summer and T_min during winter, were identified as the main source of uncertainty disparities. These results highlight the importance of considering separately projections for T_max and T_min when assessing climate change impacts, even in cases where average projected changes are similar. In addition, impacts that are most sensitive to summertime T_min or wintertime T_max may be more predictable than suggested by analyses using only projections of daily average temperatures.
Wilson, Robert M.
2013-01-01
Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadal- and SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.
Observed Abrupt Changes in Minimum and Maximum Temperatures in Jordan in the 20th Century
Mohammad M. Samdi
2006-01-01
This study examines changes in annual and seasonal mean (minimum and maximum) temperature variations in Jordan during the 20th century. The analyses focus on the time series records at the Amman Airport Meteorological (AAM) station. The occurrence of abrupt changes and trends was examined using cumulative sum charts (CUSUM) with bootstrapping and the Mann-Kendall rank test. Statistically significant abrupt changes and trends have been detected. Major change points in the mean minimum (night-time) and mean maximum (day-time) temperatures occurred in 1957 and 1967, respectively. A minor change point in the annual mean maximum temperature also occurred in 1954, in essential agreement with the detected change in minimum temperature. The analysis showed a significant warming trend after the years 1957 and 1967 for the minimum and maximum temperatures, respectively. The analysis of maximum temperatures shows a significant warming trend after the year 1967 for the summer season, with a rate of temperature increase of 0.038°C/year. The analysis of minimum temperatures shows a significant warming trend after the year 1957 for all seasons. Temperature and rainfall data from other stations in the country have been considered and showed similar changes.
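For reference, a minimal implementation of the Mann-Kendall rank test used above (normal approximation, no tie correction); the sample series is synthetic:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and a two-sided
    p-value via the normal approximation (ties ignored for brevity)."""
    x = np.asarray(x)
    n = len(x)
    # S = number of concordant pairs minus discordant pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0  # continuity correction
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, p

# Synthetic warming series: linear trend plus noise.
rng = np.random.default_rng(0)
temps = 0.02 * np.arange(60) + rng.normal(0, 0.3, 60)
print(mann_kendall(temps))  # large positive S, small p => significant warming trend
```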
Performance of Velicer's Minimum Average Partial Factor Retention Method with Categorical Variables
Garrido, Luis E.; Abad, Francisco J.; Ponsoda, Vicente
2011-01-01
Despite strong evidence supporting the use of Velicer's minimum average partial (MAP) method to establish the dimensionality of continuous variables, little is known about its performance with categorical data. Seeking to fill this void, the current study takes an in-depth look at the performance of the MAP procedure in the presence of…
Camarrone, Flavio; Ivanova, Anna; Decoster, Wivine; de Jong, Felix; van Hulle, Marc M
2015-01-01
To examine whether the minimum as well as the maximum voice intensity (i.e. sound pressure level, SPL) curves of a voice range profile (VRP) are required when discovering different voice groups based on a clustering analysis. In this approach, no a priori labeling of voice types is used. VRPs of 194 (84 male and 110 female) professional singers were registered and processed. Cluster analysis was performed with the use of features related to (1) both the maximum and minimum SPL curves and (2) the maximum SPL curve only. Features related to the maximum as well as the minimum SPL curves showed three clusters in both male and female voices. These clusters, or voice groups, are based on voice types with similar VRP features. However, when using features related only to the maximum SPL curve, the clusters became less obvious. Features related to the maximum and minimum SPL curves of a VRP are both needed in order to identify the three voice clusters.
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
Stipanovic, Dusan M., E-mail: dusan@illinois.edu [University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Department of Industrial and Enterprise Systems Engineering (United States); Tomlin, Claire J., E-mail: tomlin@eecs.berkeley.edu [University of California at Berkeley, Department of Electrical Engineering and Computer Science (United States); Leitmann, George, E-mail: gleit@berkeley.edu [University of California at Berkeley, College of Engineering (United States)
2012-12-15
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
Pan, Sudip; Solà, Miquel; Chattaraj, Pratim K
2013-02-28
Hardness and electrophilicity values for several molecules involved in different chemical reactions are calculated at various levels of theory and with different basis sets. The effects of these aspects, as well as of different approximations in the calculation of those values, vis-à-vis the validity of the maximum hardness and minimum electrophilicity principles are analyzed for some representative reactions. Among the 101 exothermic reactions studied, 61.4% and 69.3% of the reactions are found to obey the maximum hardness and minimum electrophilicity principles, respectively, when the hardness of products and reactants is expressed in terms of their geometric means. However, when we use the arithmetic mean, the percentage is somewhat reduced. When we express the hardness in terms of scaled hardness, the percentage obeying the maximum hardness principle improves. We have observed that the maximum hardness principle is more likely to fail when very hard species like F−, H2, CH4, N2, and OH appear on the reactant side, and in most of the association reactions. Most of the association reactions obey the minimum electrophilicity principle nicely. The best results (69.3%) for the maximum hardness and minimum electrophilicity principles reject the 50% null hypothesis at the 2% level of significance.
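For orientation, the quantities compared above admit the standard conceptual-DFT finite-difference definitions below (a textbook recollection, not reproduced from the paper; some authors define the hardness without the factor of 1/2), where I and A are the vertical ionization energy and electron affinity and μ is the electronic chemical potential; the geometric-mean and arithmetic-mean hardnesses of a set of N species are then:

\[
\mu \simeq -\frac{I+A}{2}, \qquad \eta \simeq \frac{I-A}{2}, \qquad \omega = \frac{\mu^{2}}{2\eta},
\]
\[
\eta_{\mathrm{GM}} = \Big(\prod_{i=1}^{N} \eta_{i}\Big)^{1/N}, \qquad \eta_{\mathrm{AM}} = \frac{1}{N}\sum_{i=1}^{N} \eta_{i}.
\]

The maximum hardness principle then asks whether Δη = η(products) − η(reactants) > 0, and the minimum electrophilicity principle whether Δω < 0, with the means taken over the species on each side.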
Variability of maximum and mean average temperature across Libya (1945-2009)
Ageena, I.; Macdonald, N.; Morse, A. P.
2014-08-01
Spatial and temporal variability in daily maximum and mean average daily temperature, monthly maximum and mean average monthly temperature for nine coastal stations during the period 1956-2009 (54 years), and annual maximum and mean average temperature for coastal and inland stations for the period 1945-2009 (65 years) across Libya are analysed. During the period 1945-2009, significant increases in maximum temperature (0.017 °C/year) and mean average temperature (0.021 °C/year) are identified at most stations. Significantly, warming in annual maximum temperature (0.038 °C/year) and mean average annual temperatures (0.049 °C/year) are observed at almost all study stations during the last 32 years (1978-2009). The results show that Libya has witnessed a significant warming since the middle of the twentieth century, which will have a considerable impact on societies and the ecology of the North Africa region, if increases continue at current rates.
3D facial expression recognition using maximum relevance minimum redundancy geometrical features
Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce
2012-12-01
In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally the approaches to FER consist of three main steps: face detection, feature extraction, and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds
Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)
2016-03-15
The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds and their electronegativities, chemical hardnesses, and electrophilicities. Lattice energy, electronegativity, chemical hardness, and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of hardness (Δη), polarizability (Δα), and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all the chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage ionic character and internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computing the percentage ionic character and internuclear distances of ionic compounds are valid.
The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.
Macpherson, David A.; Even, William E.
The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…
How do GCMs represent daily maximum and minimum temperatures in La Plata Basin?
Bettolli, M. L.; Penalba, O. C.; Krieger, P. A.
2013-05-01
This work focuses on the southern La Plata Basin region, which is one of the most important agriculture and hydropower producing regions worldwide. Extreme climate events such as cold and heat waves and frost events have a significant socio-economic impact. It is a big challenge for global climate models (GCMs) to simulate regional patterns, temporal variations, and the distribution of temperature on a daily basis. Taking into account the present and future relevance of the region for the economy of the countries involved, it is very important to analyze maximum and minimum temperatures for model evaluation and development. This kind of study is also the basis for a great many statistical downscaling methods in a climate change context. The aim of this study is to analyze the ability of GCMs to reproduce the observed daily maximum and minimum temperatures in the southern La Plata Basin region. To this end, daily fields of maximum and minimum temperatures from a set of 15 GCMs were used. The outputs corresponding to the historical experiment for the reference period 1979-1999 were obtained from the WCRP CMIP5 (World Climate Research Programme Coupled Model Intercomparison Project Phase 5). In order to compare daily temperature values in the southern La Plata Basin region as generated by GCMs to those derived from observations, daily maximum and minimum temperatures were used from the gridded dataset generated by the Claris LPB Project ("A Europe-South America Network for Climate Change Assessment and Impact Studies in La Plata Basin"). Additionally, reference station data were included in the study. The analysis focused on austral winter (June, July, August) and summer (December, January, February). The study was carried out by analyzing the performance of the 15 GCMs, as well as their ensemble mean, in simulating the probability distribution function (pdf) of maximum and minimum temperatures, including mean values, variability, skewness, etc., and regional
Computational complexity of some maximum average weight problems with precedence constraints
Faigle, Ulrich; Kern, Walter
1994-01-01
Maximum average weight ideal problems in ordered sets arise from modeling variants of the investment problem and, in particular, learning problems in the context of concepts with tree-structured attributes in artificial intelligence. Similarly, trying to construct tests with high reliability leads to...
Uncertainties in transient projections of maximum and minimum flows over the United States
Giuntoli, Ignazio; Villarini, Gabriele; Prudhomme, Christel; Hannah, David M.
2016-04-01
Global multi-model ensemble experiments provide a valuable basis for the examination of potential future changes in runoff. However, these projections suffer from uncertainties that originate from different sources at different levels in the modelling chain. We present the partitioning of uncertainty into four distinct sources for projections of decadally-averaged annual maximum (AMax) and minimum (AMin) flows over the USA. More specifically, we quantify the relative contributions of the uncertainties arising from internal variability, global impact models (GIMs), global climate models (GCMs), and representative concentration pathways (RCPs). We use a set of nine state-of-the-art GIMs driven by five CMIP5 GCMs under four RCPs from the ISI-MIP multi-model ensemble. We examine the temporal changes in the relative contribution of each source of uncertainty over the course of the 21st century. Results show that GCMs and GIMs are responsible for the majority of uncertainty over most of the study area, followed by internal variability and RCPs. Proportions vary regionally and depend on the end of the runoff spectrum (AMax, AMin) considered. In particular, for AMax, large fractions of uncertainty are attributable to GCMs throughout the century, with the GIMs increasing their share especially in mountainous and cold areas. For AMin, the contribution of GIMs to uncertainty increases with time, becoming the dominant source over most of the country by the end of the 21st century. Importantly, compared to the other sources, the RCPs' contribution to uncertainty is generally negligible (for AMin especially). This finding indicates that the effects of different emission scenarios are barely noticeable in hydrological impact studies, while GIMs and GCMs make up most of the amplitude of the ensemble spread (uncertainty).
Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model
Yang, Yuefang; Gan, Chunhui; Shen, Tingting
2017-05-01
In the study of the configuration of railway tankers for a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park's loading and unloading areas and the transportation demand for dangerous goods are taken as the constraint conditions of the model; then the transport arc capacities, transport arc flows, and transport arc edge weights are determined in the transportation network diagram; finally, the model is solved by software computation. The calculation results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical application value for tanker management in railway transportation of dangerous goods in chemical logistics parks.
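As a toy illustration of the modelling pattern (not the paper's actual network), the sketch below routes tanker flows from a source through loading/unloading areas to destinations with capacities and per-unit costs, using networkx's min-cost-max-flow solver; node names and numbers are invented:

```python
import networkx as nx

# Toy network: source s -> loading areas -> destinations -> sink t.
# 'capacity' models loading/unloading throughput (tankers/day),
# 'weight' models per-tanker transport cost on each arc.
G = nx.DiGraph()
G.add_edge("s", "areaA", capacity=8, weight=0)
G.add_edge("s", "areaB", capacity=6, weight=0)
G.add_edge("areaA", "dest1", capacity=5, weight=3)
G.add_edge("areaA", "dest2", capacity=5, weight=5)
G.add_edge("areaB", "dest1", capacity=4, weight=2)
G.add_edge("areaB", "dest2", capacity=4, weight=6)
G.add_edge("dest1", "t", capacity=7, weight=0)   # demand at destination 1
G.add_edge("dest2", "t", capacity=6, weight=0)   # demand at destination 2

flow = nx.max_flow_min_cost(G, "s", "t")         # max flow, min cost among maxima
cost = nx.cost_of_flow(G, flow)
print("tanker assignment per arc:", flow)
print("total transport cost:", cost)
```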
Govatski, J. A.; da Luz, M. G. E.; Koehler, M.
2015-01-01
We study the geminate pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy which reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation of the energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.
Trends in Mean Annual Minimum and Maximum Near Surface Temperature in Nairobi City, Kenya
George Lukoye Makokha
2010-01-01
This paper examines the long-term urban modification of mean annual conditions of near-surface temperature in Nairobi City. Data from four weather stations situated in Nairobi were collected from the Kenya Meteorological Department for the period from 1966 to 1999 inclusive. The data included mean annual maximum and minimum temperatures, and were first subjected to a homogeneity test before analysis. Both linear regression and the Mann-Kendall rank test were used to discern the mean annual trends. Results show that the change in temperature over the thirty-four-year study period is higher for minimum temperature than for maximum temperature. The warming trends began earlier and are more significant at the urban stations than at the sub-urban stations, an indication of the spread of urbanisation from the built-up Central Business District (CBD) to the suburbs. The established significant warming trends in minimum temperature, which are likely to reach higher proportions in future, pose serious challenges for climate and urban planning of the city. In particular, the effect of increased minimum temperature on human physiological comfort, building and urban design, wind circulation, and air pollution needs to be incorporated in future urban planning programmes of the city.
Weak minimum aberration and maximum number of clear two-factor interactions in 2
YANG; Guijun
2005-01-01
Analytical expressions for maximum wind turbine average power in a Rayleigh wind regime
Carlin, P.W.
1996-12-01
Average or expectation values for annual power of a wind turbine in a Rayleigh wind regime are calculated and plotted as a function of cut-out wind speed. This wind speed is expressed in multiples of the annual average wind speed at the turbine installation site. To provide a common basis for comparison of all real and imagined turbines, the Rayleigh-Betz wind machine is postulated. This machine is an ideal wind machine operating with the ideal Betz power coefficient of 0.593 in a Rayleigh probability wind regime. All other average annual powers are expressed in fractions of that power. Cases considered include: (1) an ideal machine with finite power and finite cutout speed, (2) real machines operating in variable speed mode at their maximum power coefficient, and (3) real machines operating at constant speed.
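The following sketch reproduces the kind of calculation described: the expected power of an ideal Betz machine in a Rayleigh wind regime as a function of cut-out speed, expressed in multiples of the annual mean wind speed. Rotor area, air density, and mean speed are arbitrary illustrative values; the normalization uses E[v^3] = (6/π) v̄^3 for a Rayleigh distribution:

```python
import numpy as np

rho, area, v_mean = 1.225, 1000.0, 6.0   # air density (kg/m^3), rotor area (m^2),
                                          # mean wind speed (m/s): illustrative values
cp_betz = 16.0 / 27.0                     # ideal Betz power coefficient, ~0.593

def rayleigh_pdf(v, vbar):
    """Rayleigh wind-speed pdf parameterized by its mean vbar."""
    return (np.pi * v / (2 * vbar**2)) * np.exp(-np.pi * v**2 / (4 * vbar**2))

# Rayleigh-Betz reference machine (no cut-out): E[v^3] = (6/pi) * vbar^3.
p_rb = 0.5 * rho * area * cp_betz * (6.0 / np.pi) * v_mean**3

v = np.linspace(0.0, 10 * v_mean, 200001)
dv = v[1] - v[0]
power = 0.5 * rho * area * cp_betz * v**3            # ideal power at each wind speed
for cutout_mult in (1.5, 2.0, 2.5, 3.0):
    integrand = power * rayleigh_pdf(v, v_mean) * (v <= cutout_mult * v_mean)
    p_avg = np.sum(integrand) * dv                   # expected power below cut-out
    print(f"cut-out = {cutout_mult:.1f} x mean: P_avg / P_RB = {p_avg / p_rb:.3f}")
```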
SU-E-T-578: On Definition of Minimum and Maximum Dose for Target Volume
Gong, Y; Yu, J; Xiao, Y [Thomas Jefferson University Hospital, Philadelphia, PA (United States)
2015-06-15
Purpose: This study aims to investigate the impact of different minimum and maximum dose definitions in radiotherapy treatment plan quality evaluation criteria by using tumor control probability (TCP) models. Methods: Dosimetric criteria used in the RTOG 1308 protocol are used in the investigation. RTOG 1308 is a phase III randomized trial comparing overall survival after photon versus proton chemoradiotherapy for inoperable stage II-IIIB NSCLC. The prescription dose for the planning target volume (PTV) is 70 Gy. The maximum dose (Dmax) should not exceed 84 Gy and the minimum dose (Dmin) should not go below 59.5 Gy in order for the plan to be "per protocol" (satisfactory). A mathematical model that simulates the characteristics of the PTV dose volume histogram (DVH) curve with normalized volume is built. Dmax and Dmin are noted as the percentage-volume doses Dη% and D(100-δ)%, with η and δ ranging from 0 to 3.5. The model includes three straight-line sections and goes through four points: D95% = 70 Gy, Dη% = 84 Gy, D(100-δ)% = 59.5 Gy, and D100% = 0 Gy. For each set of η and δ, the TCP value is calculated using the inhomogeneously-irradiated-tumor logistic model with D50 = 74.5 Gy and γ50 = 3.52. Results: TCP varies within 0.9% for η and δ values between 0 and 1. For η and δ between 0 and 2, the TCP change was up to 2.4%. For η and δ variations from 0 to 3.5, a maximum TCP difference of 8.3% is seen. Conclusion: When the volumes defining the maximum and minimum dose varied by more than 2%, significant TCP variations were seen. It is recommended that less than 2% volume be used in the definition of Dmax or Dmin for target dosimetric evaluation criteria. This project was supported by NIH grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant.
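A sketch of this kind of TCP calculation, under stated assumptions: the log-logistic dose response tcp(D) = 1/(1 + (D50/D)^(4γ50)) is one common reading of "logistic model"; voxel independence (TCP = Π tcp(D_i)^{v_i}) is assumed for the inhomogeneous dose; and the final DVH segment down to 0 Gy is excluded from the product (a strict independent-voxel product would otherwise be driven to zero by the zero-dose tail). D50 and γ50 are taken from the abstract; everything else is illustrative, not the authors' implementation:

```python
import numpy as np

D50, GAMMA50 = 74.5, 3.52        # from the abstract

def tcp_voxel(d):
    """Log-logistic dose response; one common form of the 'logistic model'."""
    return 1.0 / (1.0 + (D50 / np.maximum(d, 1e-9)) ** (4.0 * GAMMA50))

def tcp_plan(eta, delta, n=20000):
    """TCP for the abstract's 3-segment DVH model: dose-vs-volume curve through
    (eta%, 84 Gy), (95%, 70 Gy), (100-delta %, 59.5 Gy), flat at 84 Gy below eta%.
    The final segment down to 0 Gy is excluded (assumption, see lead-in)."""
    vol = np.linspace(0.0, 100.0 - delta, n)          # cumulative volume (%)
    pts_v = [0.0, max(eta, 1e-9), 95.0, 100.0 - delta]
    pts_d = [84.0, 84.0, 70.0, 59.5]
    dose = np.interp(vol, pts_v, pts_d)
    # Independent voxels: TCP = prod tcp(D_i)^(dv/100) -> product in log space.
    return float(np.exp(np.mean(np.log(tcp_voxel(dose)))))

for eta, delta in [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.5, 3.5)]:
    print(f"eta={eta}, delta={delta}: TCP = {tcp_plan(eta, delta):.4f}")
```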
The ancient Egyptian civilization: maximum and minimum in coincidence with solar activity
Shaltout, M.
It is proved from the last 22 years of observations of the total solar irradiance (TSI) from space by artificial satellites that TSI shows negative correlation with solar activity (sunspots, flares, and 10.7 cm radio emissions) from day to day, but shows positive correlation with the same activity from year to year (on the basis of the annual average of each). Also, the solar constant, as estimated from ground-station observations of beam solar radiation during the 20th century, coincides with the phases of the 11-year cycles. It is known from sunspot observations (250 years), and from C14 analysis, that there are other long-term cycles of solar activity longer than the 11-year cycle. The variability of the total solar irradiance affects the climate and the Nile flooding: there are periodicities in the Nile flooding similar to those of solar activity, from the analysis of about 1300 years of Nile level observations at Cairo. The secular variations of the Nile levels, regularly measured from the 7th to the 15th century A.D., clearly correlate with the solar variations, which suggests evidence for solar influence on climatic changes in the East African tropics. The civilization of the ancient Egyptians was highly correlated with the Nile flooding, as the river Nile was, and still is, the source of life in the Valley and Delta inside a highly arid desert area. The study depends on long-term historical data for carbon-14 (more than five thousand years) and a chronological scan of all the elements of the ancient Egyptian civilization from the first dynasty to the twenty-sixth dynasty. The result shows coincidence between the ancient Egyptian civilization and solar activity. For example, the period of pyramid building, one of the brilliant periods, corresponds to maximum solar activity, whereas the periods of occupation of Egypt by foreign peoples correspond to minimum solar activity. The decline
Sadjadi, Firooz A; Mahalanobis, Abhijit
2006-05-01
We report the development of a technique for adaptive selection of polarization ellipse tilt and ellipticity angles such that the target separation from clutter is maximized. From the radar scattering matrix [S] and its complex components, in phase and quadrature phase, the elements of the Mueller matrix are obtained. Then, by means of polarization synthesis, the radar cross sections of the radar scatterers are obtained at different transmitting and receiving polarization states. By designing a maximum average correlation height filter, we derive a target versus clutter distance measure as a function of four transmit and receive polarization state angles. The results of applying this method on real synthetic aperture radar imagery indicate a set of four transmit and receive angles that lead to maximum target versus clutter discrimination. These optimum angles are different for different targets. Hence, by adaptive control of the state of polarization of polarimetric radar, one can noticeably improve the discrimination of targets from clutter.
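A minimal sketch of the polarization-synthesis step described above: a transmit/receive state is built from tilt and ellipticity angles and applied to a complex scattering matrix. The matrix values and angle choices are illustrative, not from the paper:

```python
import numpy as np

def pol_state(tilt, ellipticity):
    """Unit Jones vector for a polarization ellipse with tilt psi and
    ellipticity chi (radians): h = R(psi) @ [cos chi, i sin chi]."""
    c, s = np.cos(tilt), np.sin(tilt)
    rot = np.array([[c, -s], [s, c]])
    return rot @ np.array([np.cos(ellipticity), 1j * np.sin(ellipticity)])

def synthesized_rcs(S, tx, rx):
    """Radar cross section (up to a constant) synthesized from the complex
    scattering matrix S for given transmit/receive states (backscatter
    voltage convention v = rx^T S tx assumed)."""
    return np.abs(rx @ S @ tx) ** 2

# Hypothetical single-look scattering matrix (I and Q components combined).
S = np.array([[1.0 + 0.2j, 0.1 - 0.3j],
              [0.1 - 0.3j, 0.8 + 0.1j]])
tx = pol_state(np.deg2rad(30), np.deg2rad(10))
rx = pol_state(np.deg2rad(45), np.deg2rad(-5))
print(synthesized_rcs(S, tx, rx))
```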
Signal window minimum average error algorithm for multi-phase level computer-generated holograms
El Bouz, Marwa; Heggarty, Kevin
2000-06-01
This paper extends the article "Signal window minimum average error algorithm for computer-generated holograms" (JOSA A 1998) to multi-phase level CGHs. We show that using the same rule for calculating the complex error diffusion weights, iterative-algorithm-like low-error signal windows can be obtained for any window shape or position (on- or off-axis) and any number of CGH phase levels. Important algorithm parameters such as amplitude normalisation level and phase freedom diffusers are described and investigated to optimize the algorithm. We show that, combined with a suitable diffuser, the algorithm makes feasible the calculation of high performance CGHs far larger than currently practical with iterative algorithms yet now realisable with modern fabrication techniques. Preliminary experimental optical reconstructions are presented.
Wu, Yating; Kuang, Bin; Wang, Tao; Zhang, Qianwu; Wang, Min
2015-12-01
This paper presents a minimum cost maximum flow (MCMF) based upstream bandwidth allocation algorithm, which supports differentiated QoS for orthogonal frequency division multiple access passive optical networks (OFDMA-PONs). We define a utility function as the metric to characterize the satisfaction degree of an ONU on the obtained bandwidth. The bandwidth allocation problem is then formulated as maximizing the sum of the weighted total utility functions of all ONUs. By constructing a flow network graph, we obtain the optimized bandwidth allocation using the MCMF algorithm. Simulation results show that the proposed scheme improves the performance in terms of mean packet delay, packet loss ratio and throughput.
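A minimal sketch of the min-cost max-flow formulation using networkx; the node names, capacities and the (negative, discretized) costs standing in for marginal utilities are all hypothetical:

```python
import networkx as nx

# Minimal sketch: total upstream capacity is split among three ONUs by a
# min-cost max-flow; negative integer weights stand in for (discretized)
# marginal utilities, so the max flow prefers high-utility ONUs first.
G = nx.DiGraph()
G.add_edge("S", "OLT", capacity=100)          # total upstream bandwidth units
demands = {"ONU1": (60, -3), "ONU2": (50, -2), "ONU3": (40, -1)}
for onu, (req, cost) in demands.items():
    G.add_edge("OLT", onu, capacity=req, weight=cost)
    G.add_edge(onu, "T", capacity=req)

flow = nx.max_flow_min_cost(G, "S", "T")
print({onu: flow["OLT"][onu] for onu in demands})  # granted bandwidth per ONU
```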
U.S. Geological Survey, Department of the Interior — This data set represents the average monthly minimum temperature in Celsius multiplied by 100 for 2002 compiled for every catchment of NHDPlus for the conterminous...
On the maximum and minimum of two modified Gamma-Gamma variates with applications
Al-Quwaiee, Hessa
2014-04-01
In this work, we derive the statistical characteristics of the maximum and the minimum of two modified Gamma-Gamma variates in closed form in terms of Meijer's G-function and the extended generalized bivariate Meijer's G-function. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and of (ii) a dual-hop free-space optical relay transmission system. Computer-based Monte-Carlo simulations verify our new analytical results.
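A Monte-Carlo check in the spirit of the verification mentioned above, a minimal sketch: plain (unmodified, no pointing-error) Gamma-Gamma variates are simulated as products of two unit-mean gamma variates, and the max/min of two independent, non-identically distributed branches are averaged; the (alpha, beta) values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_gamma(alpha, beta, n):
    """Samples of a Gamma-Gamma variate (product of two unit-mean gamma
    variates), a common turbulence model in free-space optics."""
    return rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)

# Max/min of two independent, non-identically distributed branches, as used
# for selection combining (max) and dual-hop relaying bounds (min).
n = 200_000
x1 = gamma_gamma(4.0, 2.0, n)   # illustrative (alpha, beta) pairs
x2 = gamma_gamma(5.0, 3.0, n)
print(np.mean(np.maximum(x1, x2)), np.mean(np.minimum(x1, x2)))
```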
Realization of Minimum and Maximum Gate Function in Ta2O5-based Memristive Devices
Breuer, Thomas; Nielen, Lutz; Roesgen, Bernd; Waser, Rainer; Rana, Vikas; Linn, Eike
2016-04-01
Redox-based resistive switching devices (ReRAM) are considered key enablers for future non-volatile memory and logic applications. Functionally enhanced ReRAM devices could enable new hardware concepts, e.g. logic-in-memory or neuromorphic applications. In this work, we demonstrate the implementation of ReRAM-based fuzzy logic gates using Ta2O5 devices to enable analog Minimum and Maximum operations. The realized gates consist of two anti-serially connected ReRAM cells offering two inputs and one output. The cells offer an endurance of up to 10^6 cycles. By means of exemplary input signals, each gate functionality is verified and signal constraints are highlighted. This realization could improve the efficiency of analog processing tasks such as sorting networks in the future.
Verification of surface minimum, mean, and maximum temperature forecasts in Calabria for summer 2008
S. Federico
2011-02-01
Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies/Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online at http://meteo.crati.it/previsioni.html (every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (from 6 June to 30 September; 112 runs). For this purpose, gridded high horizontal resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km).
Gridded analysis is based on Optimal Interpolation (OI) and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while spatial interpolation (SI) analysis is considered to quantify the statistics' sensitivity to the verifying analysis and to show the quality of the OI analyses for different background fields.
Two case studies, the first with a low root mean square error (RMSE) in the OI analysis (less than the 10th percentile), the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. For the period considered
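A minimal sketch of the kind of grid-point verification statistics used above (RMSE and bias of a forecast field against the verifying analysis); the fields are synthetic stand-ins:

```python
import numpy as np

def verification_stats(forecast, analysis):
    """RMSE and bias of a gridded temperature forecast against the
    verifying analysis (both arrays on the same 2.5 km grid)."""
    err = np.asarray(forecast) - np.asarray(analysis)
    return float(np.sqrt(np.mean(err**2))), float(np.mean(err))

# Toy fields standing in for one day of Tmax forecast vs. OI analysis.
rng = np.random.default_rng(0)
analysis = 25 + rng.normal(0, 2, (50, 50))
forecast = analysis + rng.normal(0.5, 1.5, (50, 50))  # biased, noisy forecast
print(verification_stats(forecast, analysis))          # ~ (1.58, 0.5)
```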
Asymptotic Behavior of the Maximum and Minimum Singular Value of Random Vandermonde Matrices
Tucci, Gabriel H
2012-01-01
This work examines various statistical distributions in connection with random Vandermonde matrices and their extension to $d$-dimensional phase distributions. Upper and lower bound asymptotics for the maximum singular value are found to be $O(\log N^d)$ and $O(\log N^{d}/\log \log N^d)$ respectively, where $N$ is the dimension of the matrix, generalizing the results in \cite{TW}. We further study the behavior of the minimum singular value of a random Vandermonde matrix. In particular, we prove that the minimum singular value is at most $N^2\exp(-C\sqrt{N})$ where $N$ is the dimension of the matrix and $C$ is a constant. Furthermore, the value of the constant $C$ is determined explicitly. The main result is obtained in two different ways. One approach uses techniques from stochastic processes and in particular, a construction related with the Brownian bridge. The other one is a more direct analytical approach involving combinatorics and complex analysis. As a consequence, we obtain a lower bound for the maxi...
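A quick numerical illustration of these two extremes, a minimal sketch assuming the common normalization V[j,k] = exp(2*pi*i*j*theta_k)/sqrt(N) with theta_k uniform on [0, 1) (the paper's exact ensemble may differ):

```python
import numpy as np

def random_vandermonde(N, rng):
    """N x N random Vandermonde matrix with unit-modulus entries
    V[j, k] = exp(2j*pi*j*theta_k)/sqrt(N), theta_k uniform on [0, 1)."""
    theta = rng.uniform(0.0, 1.0, N)
    j = np.arange(N)[:, None]
    return np.exp(2j * np.pi * j * theta[None, :]) / np.sqrt(N)

rng = np.random.default_rng(7)
for N in (50, 100, 200):
    s = np.linalg.svd(random_vandermonde(N, rng), compute_uv=False)
    print(N, s.max(), s.min())   # max grows slowly; min decays quickly
```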
Examining the prey mass of terrestrial and aquatic carnivorous mammals: minimum, maximum and range.
Tucker, Marlee A; Rogers, Tracey L
2014-01-01
Predator-prey body mass relationships are a vital part of food webs across ecosystems and provide key information for predicting the susceptibility of carnivore populations to extinction. Despite this, there has been limited research on the minimum and maximum prey size of mammalian carnivores. Without information on large-scale patterns of prey mass, we limit our understanding of predation pressure, trophic cascades and susceptibility of carnivores to decreasing prey populations. The majority of studies that examine predator-prey body mass relationships focus on either a single or a subset of mammalian species, which limits the strength of our models as well as their broader application. We examine the relationship between predator body mass and the minimum, maximum and range of their prey's body mass across 108 mammalian carnivores, from weasels to baleen whales (Carnivora and Cetacea). We test whether mammals show a positive relationship between prey and predator body mass, as in reptiles and birds, as well as examine how environment (aquatic and terrestrial) and phylogenetic relatedness play a role in this relationship. We found that phylogenetic relatedness is a strong driver of predator-prey mass patterns in carnivorous mammals and accounts for a higher proportion of variance compared with the biological drivers of body mass and environment. We show a positive predator-prey body mass pattern for terrestrial mammals as found in reptiles and birds, but no relationship for aquatic mammals. Our results will benefit our understanding of trophic interactions, the susceptibility of carnivores to population declines and the role of carnivores within ecosystems.
National Aeronautics and Space Administration — PROBABILITY CALIBRATION BY THE MINIMUM AND MAXIMUM PROBABILITY SCORES IN ONE-CLASS BAYES LEARNING FOR ANOMALY DETECTION GUICHONG LI, NATHALIE JAPKOWICZ, IAN HOFFMAN,...
Estimating minimum and maximum air temperature using MODIS data over Indo-Gangetic Plain
D B Shah; M R Pandya; H J Trivedi; A R Jani
2013-12-01
Spatially distributed air temperature data are required for climatological, hydrological and environmental studies. However, high spatial distribution patterns of air temperature are not available from meteorological stations due to their sparse network. The objective of this study was to estimate high spatial resolution minimum air temperature (Tmin) and maximum air temperature (Tmax) over the Indo-Gangetic Plain using Moderate Resolution Imaging Spectroradiometer (MODIS) data and India Meteorological Department (IMD) ground station data. Tmin was estimated by establishing an empirical relationship between IMD Tmin and night-time MODIS Land Surface Temperature (Ts), while Tmax was estimated using the Temperature-Vegetation Index (TVX) approach. The TVX approach is based on the linear relationship between Ts and Normalized Difference Vegetation Index (NDVI) data, where Tmax is estimated by extrapolating the NDVI-Ts regression line to the maximum value NDVImax for effective full vegetation cover. The present study also proposed a methodology to estimate NDVImax using IMD-measured Tmax for the Indo-Gangetic Plain. Comparison of MODIS-estimated Tmin with IMD-measured Tmin showed a mean absolute error (MAE) of 1.73°C and a root mean square error (RMSE) of 2.2°C. Analysis in the study for Tmax estimation showed that the calibrated NDVImax performed well, with an MAE of 1.79°C and RMSE of 2.16°C.
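A minimal sketch of the TVX extrapolation step described above; the pixel values and NDVImax are illustrative, not the study's calibrated numbers:

```python
import numpy as np

def tvx_tmax(ts, ndvi, ndvi_max):
    """TVX estimate of Tmax: fit the (assumed negative) linear Ts-NDVI
    relation inside a moving window and extrapolate to NDVImax, the value
    for effective full vegetation cover."""
    slope, intercept = np.polyfit(ndvi, ts, 1)
    return slope * ndvi_max + intercept

# Toy window of MODIS pixels: Ts drops as NDVI rises toward full cover.
ndvi = np.array([0.20, 0.30, 0.40, 0.50, 0.60])
ts   = np.array([44.0, 41.5, 39.2, 36.8, 34.5])   # degC
print(tvx_tmax(ts, ndvi, ndvi_max=0.85))           # ~ 28.6 degC
```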
THE 2003 -2007 MINIMUM, MAXIMUM AND MEDIUM DISCHARGE ANALYSIS OF THE LATORIŢA-LOTRU WATER SYSTEM
Simona-Elena MIHĂESCU
2010-06-01
From a functional point of view, the Lotru and Latoriţa make up a water system at the junction of two water flows of high hydro-energetic potential. The Lotru springs from the Parâng Massif at an elevation of over 1900 m and has an outfall elevation of 298 m, an altitude difference of 1602 m; it is an affluent of the Olt River, has a course length of 76 km and a minimum discharge of 20 m3/s. Its catchment basin covers 1024 km2. The Latoriţa springs from the Latoriţa Mountains; it is a small river with an average discharge of 2.7 m3/s and is an affluent of the Lotru. Together, the two make up a system of high hydro-energetic potential, valorized in the system of lakes which serve the Ciunget Hydro-Electric Power Plant. Galbenu and Petrimanu are two reservoirs built on the Latoriţa River; on the Lotru there are the Vidra, Balindru, Mălaia and Brădişor reservoirs. The discharge analysis of these rivers is very important for good risk management, especially concerning floods and high waters, even in the case of artificial water flows such as the Latoriţa-Lotru water system.
Zhang Zhang
2009-06-01
A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
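Since the clustering model above is chosen by AIC/AICc/BIC and then model-averaged with weighted likelihoods, a minimal sketch of those generic computations (the log-likelihoods and parameter counts are hypothetical):

```python
import numpy as np

def ic_scores(loglik, k, n):
    """AIC, corrected AICc and BIC for a model with log-likelihood
    `loglik`, `k` parameters and `n` observations."""
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = -2 * loglik + k * np.log(n)
    return aic, aicc, bic

def akaike_weights(aics):
    """Model weights from AIC differences, usable for model averaging."""
    d = np.asarray(aics) - np.min(aics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Hypothetical clustering models with 1, 2 and 4 change points.
aics = [ic_scores(ll, k, n=500)[0]
        for ll, k in [(-812.3, 1), (-805.9, 2), (-804.8, 4)]]
print(akaike_weights(aics))   # most weight on the 2-change-point model
```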
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive log-score results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
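A minimal sketch of the model-averaging algebra implied above: a BMA predictive mean plus a variance with within-model and between-model terms (standard BMA decomposition); all numbers are hypothetical:

```python
import numpy as np

def bma_prediction(means, variances, probs):
    """Model-averaged predictive mean and variance: the variance adds a
    between-model spread term to the within-model average."""
    means, variances, probs = map(np.asarray, (means, variances, probs))
    mean = np.sum(probs * means)
    var = np.sum(probs * (variances + (means - mean) ** 2))
    return mean, var

# Hypothetical U(VI) concentration predictions from three alternative models.
print(bma_prediction(means=[1.2, 1.5, 0.9],
                     variances=[0.04, 0.09, 0.05],
                     probs=[0.5, 0.3, 0.2]))
```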
AlPOs Synthetic Factor Analysis Based on Maximum Weight and Minimum Redundancy Feature Selection
Yinghua Lv
2013-11-01
The relationship between synthetic factors and the resulting structures is critical for the rational synthesis of zeolites and related microporous materials. In this paper, we develop a new feature selection method for synthetic factor analysis of (6,12)-ring-containing microporous aluminophosphates (AlPOs). The proposed method is based on a maximum weight and minimum redundancy criterion. With the proposed method, we can select the feature subset in which the features are most relevant to the synthetic structure while the redundancy among these selected features is minimal. Based on the database of AlPO synthesis, we use (6,12)-ring-containing AlPOs as the target class and incorporate 21 synthetic factors, including gel composition, solvent and organic template, to predict the formation of (6,12)-ring-containing AlPOs. From these 21 features, 12 selected features are deemed the optimal subset for distinguishing (6,12)-ring-containing AlPOs from other AlPOs without such rings. The prediction model achieves a classification accuracy rate of 91.12% using the optimal feature subset. Comprehensive experiments demonstrate the effectiveness of the proposed algorithm, and an in-depth analysis is given of the synthetic factors selected by the proposed method.
Lussana, C.
2013-04-01
The presented work focuses on the investigation of gridded daily minimum (TN) and maximum (TX) temperature probability density functions (PDFs) with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure uses the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density; otherwise the elaboration could be influenced by the unsettled station distribution. The PDF used in this study is based on the Gaussian distribution; nevertheless, it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events has been properly defined, the information can be delivered to users on a local scale in a concise way, such as: TX extremely cold/hot or TN extremely cold/hot.
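A minimal sketch of the skewed-Gaussian treatment of one grid point's TX series, using scipy's skew-normal distribution as a stand-in for the paper's asymmetrical PDF; the data are synthetic:

```python
import numpy as np
from scipy import stats

# Fit a skewed Gaussian to daily TX values at one grid point and flag
# "extremely hot" days via an upper-tail percentile of the fitted PDF.
rng = np.random.default_rng(3)
tx = stats.skewnorm.rvs(a=4.0, loc=24.0, scale=5.0, size=3000, random_state=rng)

a, loc, scale = stats.skewnorm.fit(tx)
hot_threshold = stats.skewnorm.ppf(0.99, a, loc, scale)
print(hot_threshold, np.mean(tx > hot_threshold))   # threshold, ~1% of days
```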
U.S. Geological Survey, Department of the Interior — This data set represents the average monthly maximum temperature in Celsius multiplied by 100 for 2002 compiled for every catchment of NHDPlus for the conterminous...
Rui A. P. Perdigão
2012-06-01
The application of the Maximum Entropy (ME) principle leads to a minimum of the Mutual Information (MI), $I(X,Y)$, between random variables $X,Y$, which is compatible with prescribed joint expectations and given ME marginal distributions. A sequence of sets of joint constraints leads to a hierarchy of lower MI bounds increasingly approaching the true MI. In particular, using standard bivariate Gaussian marginal distributions, it allows for the MI decomposition into two positive terms: the Gaussian MI ($I_g$), depending upon the Gaussian correlation or the correlation between 'Gaussianized variables', and a non-Gaussian MI ($I_{ng}$), coinciding with joint negentropy and depending upon nonlinear correlations. Joint moments of a prescribed total order $p$ are bounded within a compact set defined by Schwarz-like inequalities, where $I_{ng}$ grows from zero at the 'Gaussian manifold', where moments are those of Gaussian distributions, towards infinity at the set's boundary, where a deterministic relationship holds. Sources of joint non-Gaussianity have been systematized by estimating $I_{ng}$ between the input and output from a nonlinear synthetic channel contaminated by multiplicative and non-Gaussian additive noises for a full range of signal-to-noise ratio (snr) variances. We have studied the effect of varying snr on $I_g$ and $I_{ng}$ under several signal/noise scenarios.
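For the bivariate Gaussian case named above, the Gaussian MI has the closed form $I_g = -\frac{1}{2}\ln(1-\rho_g^2)$, with $\rho_g$ the correlation of the Gaussianized variables; a minimal numerical sketch (rank-based Gaussianization is one common choice, assumed here):

```python
import numpy as np
from scipy import stats

def gaussian_mi(x, y):
    """Gaussian part of the mutual information, I_g = -0.5*ln(1 - rho_g^2),
    with rho_g the correlation between 'Gaussianized' variables (obtained
    here by mapping ranks through the standard normal quantile function)."""
    gx = stats.norm.ppf((stats.rankdata(x) - 0.5) / len(x))
    gy = stats.norm.ppf((stats.rankdata(y) - 0.5) / len(y))
    rho = np.corrcoef(gx, gy)[0, 1]
    return -0.5 * np.log1p(-rho**2)   # natural log -> nats

rng = np.random.default_rng(2)
x = rng.normal(size=5000)
y = 0.6 * x + 0.8 * rng.normal(size=5000)
print(gaussian_mi(x, y))   # close to -0.5*ln(1-0.6^2) ~ 0.22 nats
```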
Carlos A. L. Pires
2013-02-01
The Minimum Mutual Information (MinMI) Principle provides the least committed, maximum-joint-entropy (ME) inferential law that is compatible with prescribed marginal distributions and empirical cross constraints. Here, we estimate MI bounds (the MinMI values) generated by constraining sets $T_{cr}$ comprising $m_{cr}$ linear and/or nonlinear joint expectations, computed from samples of $N$ iid outcomes. Marginals (and their entropy) are imposed by single morphisms of the original random variables. $N$-asymptotic formulas are given both for the distribution of cross-expectation estimation errors, and for the MinMI estimation bias, its variance and distribution. A growing $T_{cr}$ leads to an increasing MinMI, converging eventually to the total MI. Under $N$-sized samples, the MinMI increment relative to two encapsulated sets $T_{cr1} \subset T_{cr2}$ (with numbers of constraints $m_{cr1}$
Weak minimum aberration and maximum number of clear two-factor interactions in 2^{m-p}_{IV} designs
YANG Guijun; LIU Minqian; ZHANG Runchu
2005-01-01
Both the clear effects and minimum aberration criteria are important rules for design selection. In this paper, it is proved that some 2^{m-p}_{IV} designs have weak minimum aberration, by considering the number of clear two-factor interactions in the designs. Some conditions are provided under which a 2^{m-p}_{IV} design can have the maximum number of clear two-factor interactions and weak minimum aberration at the same time. Some weak minimum aberration 2^{m-p}_{IV} designs are provided as illustrations, and two non-isomorphic weak minimum aberration 2^{13-6}_{IV} designs are constructed at the end of this paper.
The effects of disjunct sampling and averaging time on maximum mean wind speeds
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time period before being saved. In either case, the extreme wind will be underestimated. This paper investigates the effects of the disjunct sampling interval and the averaging time on the attenuation of the extreme wind estimation by means of a simple theoretical approach as well as measurements.
OPTIMIZED FUEL INJECTOR DESIGN FOR MAXIMUM IN-FURNACE NOx REDUCTION AND MINIMUM UNBURNED CARBON
SAROFIM, A F; LISAUSKAS, R; RILEY, D; EDDINGS, E G; BROUWER, J; KLEWICKI, J P; DAVIS, K A; BOCKELIE, M J; HEAP, M P; PERSHING, D
1998-01-01
Reaction Engineering International (REI) has established a project team of experts to develop a technology for combustion systems which will minimize NOx emissions and minimize carbon in the fly ash. This much-needed technology will allow users to meet environmental compliance and produce a saleable by-product. This study is concerned with the NOx control technology of choice for pulverized coal fired boilers, "in-furnace NOx control," which includes: staged low-NOx burners, reburning, selective non-catalytic reduction (SNCR) and hybrid approaches (e.g., reburning with SNCR). The program has two primary objectives: 1) to improve the performance of "in-furnace" NOx control processes; 2) to devise new, or improve existing, approaches for maximum "in-furnace" NOx control and minimum unburned carbon. The program involves: 1) fundamental studies at laboratory- and bench-scale to define NO reduction mechanisms in flames and reburning jets; 2) laboratory experiments and computer modeling to improve our two-phase mixing predictive capability; 3) evaluation of commercial low-NOx burner fuel injectors to develop improved designs; and 4) demonstration of coal injectors for reburning and low-NOx burners at commercial scale. The specific objectives of the two-phase program are to: 1) conduct research to better understand the interaction of heterogeneous chemistry and two-phase mixing on NO reduction processes in pulverized coal combustion; 2) improve our ability to predict combusting coal jets by verifying two-phase mixing models under conditions that simulate the near field of low-NOx burners; 3) determine the limits on NO control by in-furnace NOx control technologies as a function of furnace design and coal type; 5) develop and demonstrate improved coal injector designs for commercial low-NOx burners and coal reburning systems; 6) modify the char burnout model in REI's coal
2010-04-01
20 Employees' Benefits; 2010-04-01 — Appendix V to Subpart C of Part 404 (Computing Primary Insurance Amounts): Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits.
Sung Woo Park
2015-03-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state of failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the sensors most frequently used in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
J C Joshi; Tankeshwar Kumar; Sunita Srivastava; Divya Sachdeva
2017-02-01
Maximum and minimum temperatures are used in avalanche forecasting models for snow avalanche hazard mitigation over the Himalaya. The present work is part of the development of a Hidden Markov Model (HMM) based avalanche forecasting system for the Pir-Panjal and Great Himalayan mountain ranges of the Himalaya. In this work, HMMs have been developed for the forecasting of maximum and minimum temperatures for Kanzalwan in the Pir-Panjal range and Drass in the Great Himalayan range with a lead time of two days. The HMMs have been developed using meteorological variables collected from these stations during the past 20 winters from 1992 to 2012. The meteorological variables have been used to define the observations and states of the models and to compute the model parameters (initial state, state transition and observation probabilities). The model parameters have been used in the Forward and the Viterbi algorithms to generate temperature forecasts. To improve the model forecasts, the model parameters have been optimised using the Baum-Welch algorithm. The models have been compared with persistence forecasts by root mean square error (RMSE) analysis using independent data from two winters (2012-13, 2013-14). The HMM for maximum temperature has shown a 4-12% and 17-19% improvement in the forecast over the persistence forecast, for day-1 and day-2, respectively. For minimum temperature, it has shown 6-38% and 5-12% improvement for day-1 and day-2, respectively.
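A minimal sketch of the decoding step such a system relies on (log-domain Viterbi over discretized observations); the two regimes and all probabilities here are purely illustrative, not the model's fitted parameters:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state path (log-domain Viterbi) for observation
    indices `obs`, initial probs pi, transition A and emission B."""
    T, S = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)          # cand[i, j]: i -> j
        back[t] = np.argmax(cand, axis=0)
        logd = cand[back[t], np.arange(S)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two hypothetical temperature regimes ("cold", "mild") and three binned
# observations of a meteorological variable; numbers are illustrative only.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.4, 0.5]])
print(viterbi([0, 1, 2, 2], pi, A, B))   # -> [0, 0, 1, 1]
```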
Applying Tabu Heuristic to Wind Influenced, Minimum Risk and Maximum Expected Coverage Routes
1997-02-01
The vehicle routing problem with maximum coverage is an extension of the classical general vehicle routing problem (GVRP). A general review from the literature for the general vehicle routing problem includes Eilon et al. (1971) and Bodin (1983). Within a traditional deterministic
U.S. Geological Survey, Department of the Interior — This tabular data set represents the catchment-average for the 30-year (1971-2000) average daily minimum temperature in Celsius multiplied by 100 compiled for every...
Cavalli, Andrea; Camilloni, Carlo; Vendruscolo, Michele
2013-03-07
In order to characterise the dynamics of proteins, a well-established method is to incorporate experimental parameters as replica-averaged structural restraints into molecular dynamics simulations. Here, we justify this approach in the case of interproton distance information provided by nuclear Overhauser effects by showing that it generates ensembles of conformations according to the maximum entropy principle. These results indicate that the use of replica-averaged structural restraints in molecular dynamics simulations, given a force field and a set of experimental data, can provide an accurate approximation of the unknown Boltzmann distribution of a system.
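A minimal sketch of a replica-averaged harmonic restraint of the kind described above, using a plain mean for simplicity (NOE-derived distances are in practice often averaged as <r^-3> or <r^-6>, and the force constant here is arbitrary); the numbers are illustrative:

```python
import numpy as np

def replica_restraint_energy(distances, d_exp, k):
    """Replica-averaged harmonic restraint: penalize the deviation of the
    average over replicas (here a simple mean of an interproton distance
    computed in each replica) from the experimental NOE-derived value."""
    d_avg = np.mean(distances)          # average over replicas
    return 0.5 * k * (d_avg - d_exp) ** 2

# Four replicas report slightly different instantaneous distances (nm);
# only their mean is restrained, leaving individual replicas free to move.
print(replica_restraint_energy([0.41, 0.47, 0.52, 0.44], d_exp=0.45, k=1000.0))
```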
Kumar, Sanjay
2016-06-01
The present paper inspects the prediction capability of the latest version of the International Reference Ionosphere (IRI-2012) model in predicting the total electron content (TEC) over seven different equatorial regions across the globe during a very low solar activity phase (2009) and a high solar activity phase (2012). This has been carried out by comparing the ground-based Global Positioning System (GPS)-derived VTEC with that from the IRI-2012 model. The observed GPS-TEC shows the presence of the winter anomaly, which is prominent during the solar maximum year 2012 and disappears during the solar minimum year 2009. The monthly and seasonal means of the IRI-2012 model TEC with the IRI-NeQ topside have been compared with the GPS-TEC, and our results show that the monthly and seasonal mean values of the IRI-2012 model overestimate the observed GPS-TEC at all the equatorial stations. The discrepancy (or overestimation) in the IRI-2012 model is found to be larger during the solar maximum year 2012 than during the solar minimum year 2009. This contradicts the results recently presented by Tariku (2015) over the equatorial regions of Uganda. The discrepancy is maximum during the December solstice and minimum during the March equinox. The magnitude of the discrepancy in the IRI-2012 model shows a longitudinal dependence, maximizing in the western longitude sector during both 2009 and 2012. The significant discrepancy in the IRI-2012 model observed during the solar minimum year 2009 could be attributed to the larger difference between the F10.7 flux and the EUV flux (26-34 nm) during the low solar activity period 2007-2009 than during the high solar activity period 2010-2012. This suggests that, to represent the solar activity impact in the IRI model, the implementation of new solar activity indices is required for better performance.
ANALYTICAL ESTIMATION OF MINIMUM AND MAXIMUM TIME EXPENDITURES OF PASSENGERS AT AN URBAN ROUTE STOP
Gorbachov, P.
2013-01-01
This scientific paper deals with the problem of defining the average time spent by passengers waiting for transport vehicles at urban stops, and presents the results of analytical modelling of this value when the traffic schedule is unknown to the passengers, for two options of vehicle traffic management on the given route.
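For context, the classical random-incidence waiting-time result that such analytical models build on (stated here as background, not as the paper's own derivation):

```latex
% Expected waiting time when passengers arrive at random (no knowledge of
% the schedule), for headways H with mean E[H] and coefficient of
% variation C_v:
\[
  E[W] \;=\; \frac{E[H^2]}{2\,E[H]} \;=\; \frac{E[H]}{2}\left(1 + C_v^2\right)
\]
% Perfectly regular service gives the minimum E[W] = E[H]/2, while
% increasingly irregular headways push E[W] toward E[H] and beyond, so the
% maximum grows with headway variance.
```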
Yatracos, Yannis G.
2013-01-01
The inherent bias pathology of the maximum likelihood (ML) estimation method is confirmed for models with unknown parameters $\theta$ and $\psi$ when MLE $\hat\psi$ is a function of MLE $\hat\theta$. To reduce $\hat\psi$'s bias, the likelihood equation to be solved for $\psi$ is updated using the model for the data $Y$ in it. Model updated (MU) MLE, $\hat\psi_{MU}$, often reduces either totally or partially $\hat\psi$'s bias when estimating shape parameter $\psi$. For the Pareto model $\hat...
Germec-Cakan, Derya; Taner, Tulin; Akan, Seden
2011-10-01
The aim of this study was to investigate upper respiratory airway dimensions in non-extraction and extraction subjects treated with minimum or maximum anchorage. Lateral cephalograms of 39 Class I subjects were divided into three groups (each containing 11 females and 2 males) according to treatment procedure: group 1, 13 patients treated with extraction of four premolars and minimum anchorage; group 2, 13 cases treated non-extraction with air-rotor stripping (ARS); and group 3, 13 bimaxillary protrusion subjects treated with extraction of four premolars and maximum anchorage. The mean ages of the patients were 18.1 ± 3.7, 17.8 ± 2.4, and 15.5 ± 0.88 years, respectively. Tongue, soft palate, hyoid position, and upper airway measurements were made on pre- and post-treatment lateral cephalograms and the differences between the mean measurements were tested using the Wilcoxon signed-ranks test. Superior and middle airway space increased significantly (P < 0.05), whereas extraction treatment using maximum anchorage had a reducing effect on the middle and inferior airway dimensions.
Liu Zhong-Bao
2016-06-01
Support Vector Machine (SVM) is one of the important stellar spectral classification methods, and it is widely used in practice. But its classification efficiency cannot be greatly improved because it does not take the class distribution into consideration. In view of this, a modified SVM, named Minimum within-class and Maximum between-class scatter Support Vector Machine (MMSVM), is constructed to deal with the above problem. MMSVM merges the advantages of Fisher's Discriminant Analysis (FDA) and SVM, and comparative experiments on the Sloan Digital Sky Survey (SDSS) show that MMSVM performs better than SVM.
Zhang, Yafei; Zhang, Fangqing; Chen, Guanghua
1994-12-01
It is proposed in this paper that the minimum substrate temperature for diamond growth from hydrogen-hydrocarbon gas mixtures is determined by the packing arrangements of hydrocarbon fragments at the surface, and the maximum substrate temperature is limited by the reconstruction of the diamond growth surface, which can be prevented by saturating the surface dangling bonds with atomic hydrogen. Theoretical calculations have been carried out using a formula proposed by Dryburgh [J. Crystal Growth 130 (1993) 305], and the results show that diamond can be deposited at substrate temperatures ranging from ≈ 400 to ≈ 1200°C by low-pressure chemical vapor deposition. This is consistent with experimental observations.
Aeronomical constraints to the minimum mass and maximum radius of hot low-mass planets
Fossati, L.; Erkaev, N. V.; Lammer, H.; Cubillos, P. E.; Odert, P.; Juvan, I.; Kislyakova, K. G.; Lendl, M.; Kubyshkina, D.; Bauer, S. J.
2017-02-01
Stimulated by the discovery of a number of close-in low-density planets, we generalise the Jeans escape parameter taking hydrodynamic and Roche lobe effects into account. We furthermore define Λ as the value of the Jeans escape parameter calculated at the observed planetary radius and mass for the planet's equilibrium temperature and considering atomic hydrogen, independently of the atmospheric temperature profile. We consider 5 and 10 M⊕ planets with an equilibrium temperature of 500 and 1000 K, orbiting early G-, K-, and M-type stars. Assuming a clear atmosphere and by comparing escape rates obtained from the energy-limited formula, which only accounts for the heating induced by the absorption of the high-energy stellar radiation, and from a hydrodynamic atmosphere code, which also accounts for the bolometric heating, we find that planets whose Λ is smaller than 15-35 lie in the "boil-off" regime, where the escape is driven by the atmospheric thermal energy and low planetary gravity. We find that the atmospheres of hot (i.e. Teq ⪆ 1000 K) low-mass (Mpl ⪅ 5-10 M⊕) planets with Λ below this range cannot be stable. These considerations place aeronomical constraints on the minimum mass and maximum radius and can be used to predict the presence of aerosols and/or constrain planetary masses, for example.
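A minimal sketch of the Λ definition above (Jeans escape parameter for atomic hydrogen, evaluated at the observed mass and radius for the equilibrium temperature); the example planet is hypothetical:

```python
import numpy as np

G, k_B, m_H = 6.674e-11, 1.381e-23, 1.673e-27   # SI units
M_earth, R_earth = 5.972e24, 6.371e6

def jeans_lambda(mass_me, radius_re, teq):
    """Jeans escape parameter Lambda = G*M*m_H / (k_B*Teq*R), evaluated at
    the observed radius and mass for the equilibrium temperature."""
    M, R = mass_me * M_earth, radius_re * R_earth
    return G * M * m_H / (k_B * teq * R)

# A hypothetical hot 5 Earth-mass planet with a large (low-density) radius:
print(jeans_lambda(5.0, 4.0, teq=1000.0))   # ~ 9.5, i.e. in the boil-off range
```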
Santos W. N. dos
2003-01-01
The hot wire technique is considered to be an effective and accurate means of determining the thermal conductivity of ceramic materials. However, specifically for materials of high thermal diffusivity, the appropriate time interval to be considered in the calculations is a decisive factor for obtaining accurate and consistent results. In this work, a numerical simulation model is proposed with the aim of determining the minimum and maximum measuring times for the hot wire parallel technique. The temperature profile generated by this model is in excellent agreement with the one experimentally obtained by this technique, where thermal conductivity, thermal diffusivity and specific heat are simultaneously determined from the same experimental temperature transient. Eighteen different specimens of refractory materials and polymers, with thermal diffusivities ranging from 1×10^-7 to 70×10^-7 m²/s, in the shape of rectangular parallelepipeds and with different dimensions, were employed in the experimental programme. An empirical equation relating the minimum and maximum measuring times to the thermal diffusivity of the sample is also obtained.
G. O. Walker
Median hourly electron content-latitude profiles obtained in South East Asia under solar minimum and maximum conditions have been used to establish seasonal and solar differences in the diurnal variations of the ionospheric equatorial anomaly (EIA). The seasonal changes have been mainly accounted for by the daytime meridional wind, which affects the EIA diffusion of ionization from the magnetic equator down the magnetic field lines towards the crests. Depending upon the seasonal location of the subsolar point in relation to the magnetic equator, diffusion rates were increased or decreased. This led to crest asymmetries at the solstices with (1) the winter crest enhanced in the morning (increased diffusion rate) and (2) the same crest decaying most rapidly in the late afternoon (faster recombination rate at lower ionospheric levels). Such asymmetries were also observed, to a lesser extent, at the equinoxes, since the magnetic equator (located at about 9°N lat) does not coincide with the geographic equator. Another factor affecting the magnitude of a particular electron content crest was the proximity of the subsolar point, since this increased the local ionization production rate. Enhancements of the EIA took place around sunset, mainly during the equinoxes and more frequently at solar maximum, and there was also evidence of apparent EIA crest resurgences around 0300 LST for all seasons at solar maximum. The latter are thought to be associated with the commonly observed, post-midnight, ionization enhancements at midlatitudes, ionization being transported to low latitudes by an equatorward wind. The ratio increases in crest peak electron contents from solar minimum to maximum, of 2.7 at the equinoxes, 2.0 at the northern summer solstice and 1.7 at the northern winter solstice, can be explained, only partly, by increases in the magnitude of the eastward electric field E overhead the magnetic equator affecting the
U.S. Geological Survey, Department of the Interior — This data set represents the 30-year (1971-2000) average annual minimum temperature in Celsius multiplied by 100 compiled for every catchment of NHDPlus for the...
U.S. Geological Survey, Department of the Interior — This tabular data set represents the average daily minimum temperature in Celsius multiplied by 100 for 2002, compiled for every MRB_E2RF1 catchment of selected...
Minimum Grading, Maximum Learning
Carey, Theodore; Carifio, James
2011-01-01
Fair and effective schools should assign grades that align with clear and consistent evidence of student performance (Wormeli, 2006), but when a student's performance is inconsistent, traditional grading practices can prove inadequate. Understanding this, increasing numbers of schools have been experimenting with the practice of assigning minimum…
Maximum outreach. . . minimum budget
Laychak, Mary Beth
2011-06-01
Many astronomical institutions have budgetary constraints that prevent them from spending large amounts on public outreach. This is especially true for smaller organizations, such as the Canada-France-Hawaii Telescope (CFHT), where manpower and funding are at a premium. To maximize our impact, we employ unconventional and affordable outreach techniques that underscore our commitment to astronomy education and our local community. We participate in many unique community interactions, ranging from rodeo calf-dressing tournaments to art gallery exhibitions of CFHT images. Further, we have developed many creative methods to communicate complex astronomical concepts to both children and adults, including the use of a modified webcam to teach infrared astronomy and the production of an online newsletter for parents, children, and educators. This presentation will discuss the outreach methods CFHT has found most effective in our local schools and our rural community.
Aggarwal, Namita; Rana, Bharti; Agrawal, R K; Kumaran, Senthil
2015-01-01
In this paper, we propose a three-phased method for diagnosis of Alzheimer's disease using structural magnetic resonance imaging (MRI). In the first phase, a gray matter tissue probability map is obtained from every brain MRI volume. Further, five regions of interest (ROIs) are extracted as per prior knowledge. In the second phase, features are extracted from each ROI using the 3D dual-tree discrete wavelet transform. In the third phase, relevant features are selected using the minimum redundancy maximum relevance feature selection technique. The decision model is built with the features so obtained, using a classifier. To evaluate the effectiveness of the proposed method, experiments are performed with four well-known classifiers on four data sets, built from a publicly available OASIS database. The performance is evaluated in terms of sensitivity, specificity and classification accuracy. It was observed that the proposed method outperforms existing methods in terms of all three performance measures. This is further validated with statistical tests.
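A minimal sketch of greedy mRMR selection as used above, with mutual information serving as both the relevance and redundancy measure (sklearn estimators; the toy data and the "informative features first" expectation are assumptions of this sketch):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr(X, y, n_select):
    """Greedy mRMR: at each step add the feature maximizing
    relevance(f; y) - mean redundancy(f; already selected)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    first = int(np.argmax(relevance))
    selected, remaining = [first], set(range(X.shape[1])) - {first}
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for f in remaining:
            red = np.mean([mutual_info_regression(X[:, [f]], X[:, s],
                                                  random_state=0)[0]
                           for s in selected])
            score = relevance[f] - red
            if score > best_score:
                best, best_score = f, score
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: features 0 and 1 informative, the rest noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(mrmr(X, y, n_select=3))   # informative features should come first
```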
S. Vignesh
2017-04-01
Flow-based erosion-corrosion problems are very common in fluid-handling equipment such as propellers, impellers and pumps in warships and submarines. Though there are many coating materials available to combat erosion-corrosion damage in the above components, iron-based amorphous coatings are considered to be more effective in combating erosion-corrosion problems. The high-velocity oxy-fuel (HVOF) spray process is considered to be a better process for coating iron-based amorphous powders. In this investigation, an iron-based amorphous metallic coating was developed on a 316 stainless steel substrate using the HVOF spray technique. Empirical relationships were developed to predict the porosity and microhardness of the iron-based amorphous coating, incorporating HVOF spray parameters such as oxygen flow rate, fuel flow rate, powder feed rate, carrier gas flow rate, and spray distance. Response surface methodology (RSM) was used to identify the optimal HVOF spray parameters to attain a coating with minimum porosity and maximum hardness.
Oka, Kiyoshi; Yakushiji, Toshitake; Sato, Hiro; Mizuta, Hiroshi [Kumamoto University, Department of Orthopaedic and Neuro-Musculoskeletal Surgery, Faculty of Medical and Pharmaceutical Sciences, Kumamoto (Japan); Hirai, Toshinori; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical and Pharmaceutical Sciences, Kumamoto (Japan)
2010-02-15
The objective of this study was to evaluate whether the average apparent diffusion coefficient (ADC) or the minimum ADC is more useful for evaluating the chemotherapeutic response of osteosarcoma. Twenty-two patients with osteosarcoma were examined in this study. Diffusion-weighted (DW) magnetic resonance (MR) imaging was performed for all patients before and after chemotherapy. The pre- and post-chemotherapy values were obtained for both the average and the minimum ADC. The pre-chemotherapy values of the average ADC and minimum ADC, respectively, were compared with the post-chemotherapy values. In addition, the ADC ratios ([ADC_post - ADC_pre] / ADC_pre) were calculated using the average ADC and the minimum ADC. The twenty-two patients with osteosarcomas were divided into two groups, those with a good response to chemotherapy (≥ 90% tumor necrosis, n = 7) and those with a poor response (< 90% tumor necrosis, n = 15). The average ADC ratio and the minimum ADC ratio of the two groups were compared. With both the average ADC and the minimum ADC, post-chemotherapy values were significantly higher than pre-chemotherapy values (P < 0.05). The patients with a good response had a significantly higher minimum ADC ratio than those with a poor response (1.01 ± 0.22 and 0.55 ± 0.29, respectively, P < 0.05). However, with regard to the average ADC ratio, no significant difference was observed between the two groups (0.66 ± 0.18 and 0.46 ± 0.31, respectively, P = 0.19). The minimum ADC is useful for evaluating the chemotherapeutic response of osteosarcoma. (orig.)
Luo, Xiaodong; Lorentzen, Rolf J; Nævdal, Geir
2015-01-01
The focus of this work is on an alternative implementation of the iterative ensemble smoother (iES). We show that iteration formulae similar to those used in \cite{chen2013-levenberg,emerick2012ensemble} can be derived by adopting a regularized Levenberg-Marquardt (RLM) algorithm \cite{jin2010regularized} to approximately solve a minimum-average-cost (MAC) problem. This not only leads to an alternative theoretical tool in understanding and analyzing the behaviour of the aforementioned iES, but also provides insights and guidelines for further developments of the smoothing algorithms. For illustration, we compare the performance of an implementation of the RLM-MAC algorithm to that of the approximate iES used in \cite{chen2013-levenberg} in three numerical examples: an initial condition estimation problem in a strongly nonlinear system, a facies estimation problem in a 2D reservoir and the history matching problem in the Brugge field case. In these three specific cases, the RLM-MAC algorithm exhibits comparabl...
Cooper, Margaret E; Goldstein, Toby H; Maher, Brion S; Marazita, Mary L
2005-12-30
In order to detect linkage of the simulated complex disease Kofendrerd Personality Disorder across studies from multiple populations, we performed a genome scan meta-analysis (GSMA). Using the 7-cM microsatellite map, nonparametric multipoint linkage analyses were performed separately on each of the four simulated populations independently to determine p-values. The genome of each population was divided into 20-cM bin regions, and each bin was rank-ordered based on the most significant linkage p-value for that population in that region. The bin ranks were then averaged across all four studies to determine the most significant 20-cM regions over all studies. Statistical significance of the averaged bin ranks was determined from a normal distribution of randomly assigned rank averages. To narrow the region of interest for fine-mapping, the meta-analysis was repeated two additional times, with each of the 20-cM bins offset by 7 cM and 13 cM, respectively, creating regions of overlap with the original method. The 6-7 cM shared regions, where the highest averaged 20-cM bins from each of the three offsets overlap, designated the minimum region of maximum significance (MRMS). Application of the GSMA-MRMS method revealed genome-wide significance (p-values refer to the average rank assigned to the bin) at regions including or adjacent to all of the simulated disease loci, including chromosome 1 and the region adjacent to D4 (p < 0.05 for 7-14 cM). This GSMA analysis approach demonstrates the power of linkage meta-analysis to detect multiple genes simultaneously for a complex disorder. The MRMS method enhances this powerful tool to focus on more localized regions of linkage.
Muin F. Ubeid
2012-12-01
The optical transmission properties of a structure consisting of N identical pairs of left- and right-handed materials are investigated theoretically and numerically. Maxwell's equations are used to determine the electric and magnetic fields of the incident waves at each layer. Snell's law is applied and the boundary conditions are imposed at each layer interface to calculate the Fresnel coefficients. Expressions for the reflectance and transmittance of the structure are given in terms of these coefficients. In the numerical results, the transmittance of the structure is computed and illustrated as a function of frequency under different values of N. Minimum transmittance is achieved by using high and low opposite refractive indices for the left- and right-handed materials of each pair of the structure. The frequency band of this transmittance is reduced by decreasing N. Maximum transmittance is demonstrated by using two slabs of the same width and opposite refractive indices placed between two dielectric media of the same kind. The effect of frequency and angle of incidence is very weak in these structures as compared to their all-dielectric counterparts. Moreover, the obtained results are in agreement with the law of conservation of energy.
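A minimal transfer-matrix sketch of the normal-incidence transmittance computation outlined above, parameterizing each slab by (refractive index, admittance, width) so that a left-handed layer gets n < 0; the values are illustrative, and the perfect-transmission pair mirrors the abstract's equal-width, opposite-index case:

```python
import numpy as np

def layer_matrix(n, Y, d, k0):
    """Characteristic matrix of one homogeneous layer at normal incidence;
    n < 0 (with positive admittance Y) models a left-handed layer."""
    delta = k0 * n * d
    return np.array([[np.cos(delta), 1j * np.sin(delta) / Y],
                     [1j * Y * np.sin(delta), np.cos(delta)]])

def transmittance(layers, k0, Y_in=1.0, Y_out=1.0):
    """Intensity transmittance of a layer stack between two media
    with real admittances Y_in and Y_out."""
    M = np.eye(2, dtype=complex)
    for n, Y, d in layers:
        M = M @ layer_matrix(n, Y, d, k0)
    (m11, m12), (m21, m22) = M
    t = 2 * Y_in / (Y_in * m11 + Y_in * Y_out * m12 + m21 + Y_out * m22)
    return (Y_out / Y_in) * abs(t) ** 2

# One pair: a right-handed slab (n=2) and a left-handed slab (n=-2) of equal
# width and equal admittance -- the pair's matrix reduces to the identity,
# giving unit transmittance at any frequency.
k0 = 2 * np.pi / 600e-9                    # vacuum wavenumber at 600 nm
pair = [(2.0, 2.0, 50e-9), (-2.0, 2.0, 50e-9)]
print(transmittance(pair, k0))             # ~ 1.0
```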
Hejazi, Mohamad I.; Cai, Ximing
2009-04-01
Input variable selection (IVS) is a necessary step in modeling water resources systems. Neglecting this step may lead to unnecessary model complexity and reduced model accuracy. In this paper, we apply the minimum redundancy maximum relevance (MRMR) algorithm to identify the most relevant set of inputs in modeling a water resources system. We further introduce two modified versions of the MRMR algorithm (α-MRMR and β-MRMR), where α and β are correction factors that are found to increase and decrease as a power-law function, respectively, with the progress of the input selection algorithm and the increase of the number of selected input variables. We apply the proposed algorithms to 22 reservoirs in California to predict daily releases based on a set of 121 potential input variables. Results indicate that the two proposed algorithms are good measures of model inputs, as reflected in enhanced model performance. The α-MRMR and β-MRMR values exhibit a strong negative correlation with model performance, as depicted in lower root-mean-square error (RMSE) values.
ShaoPeng Wang
2016-01-01
The development of biochemistry and molecular biology has revealed an increasingly important role of compounds in several biological processes. Like the aptamer-protein interaction, the aptamer-compound interaction attracts increasing attention. However, it is time-consuming to select proper aptamers against compounds using traditional methods, such as exponential enrichment. Thus, there is an urgent need to design effective computational methods for searching for effective aptamers against compounds. This study attempted to extract important features for aptamer-compound interactions using feature selection methods, such as Maximum Relevance Minimum Redundancy, as well as incremental feature selection. Each aptamer-compound pair was represented by properties derived from the aptamer and compound, including frequencies of single nucleotides and dinucleotides for the aptamer, as well as the constitutional, electrostatic, quantum-chemical, and space conformational descriptors of the compounds. As a result, some important features were obtained. To confirm the importance of the obtained features, we further discussed the associations between them and aptamer-compound interactions. Simultaneously, an optimal prediction model based on the nearest neighbor algorithm was built to identify aptamer-compound interactions, which has the potential to be a useful tool for the identification of novel aptamer-compound interactions. The program is available upon request.
Liu, Lili; Chen, Lei; Zhang, Yu-Hang; Wei, Lai; Cheng, Shiwen; Kong, Xiangyin; Zheng, Mingyue; Huang, Tao; Cai, Yu-Dong
2017-02-01
Drug-drug interaction (DDI) defines a situation in which one drug affects the activity of another when both are administered together. DDI is a common cause of adverse drug reactions and sometimes also leads to improved therapeutic effects. Therefore, it is of great interest to discover novel DDIs according to their molecular properties and mechanisms in a robust and rigorous way. This paper attempts to predict effective DDIs using the following properties: (1) chemical interaction between drugs; (2) protein interactions between the targets of drugs; and (3) target enrichment of KEGG pathways. The data consisted of 7323 pairs of DDIs collected from the DrugBank and 36,615 pairs of drugs constructed by randomly combining two drugs. Each drug pair was represented by 465 features derived from the aforementioned three categories of properties. The random forest algorithm was adopted to train the prediction model. Some feature selection techniques, including minimum redundancy maximum relevance and incremental feature selection, were used to extract key features as the optimal input for the prediction model. The extracted key features may help to gain insights into the mechanisms of DDIs and provide some guidelines for the relevant clinical medication developments, and the prediction model can give new clues for identification of novel DDIs.
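A minimal sketch of the incremental feature selection (IFS) loop mentioned above: feature prefixes are grown along a precomputed ranking (e.g. from mRMR) and the prefix size with the best cross-validated score is kept; the ranking and data are toy stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def incremental_feature_selection(X, y, ranked):
    """IFS: evaluate growing prefixes of a ranked feature list and return
    the prefix with the best cross-validated accuracy."""
    scores = []
    for k in range(1, len(ranked) + 1):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        scores.append(cross_val_score(clf, X[:, ranked[:k]], y, cv=5).mean())
    best_k = int(np.argmax(scores)) + 1
    return ranked[:best_k], scores

# Toy data with a hypothetical mRMR ranking of five features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 2] - X[:, 0] > 0).astype(int)
best, scores = incremental_feature_selection(X, y, ranked=[2, 0, 1, 3, 4])
print(best, np.round(scores, 3))
```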
Xin Ma
2015-01-01
The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play an important role in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a Matthews correlation coefficient of 0.737). The high prediction accuracy and successful prediction performance suggest that our method can be a useful approach for identifying RNA-binding proteins from sequence information.
U.S. Geological Survey, Department of the Interior — This data set represents the 30-year (1971-2000) average annual maximum temperature in Celsius multiplied by 100 compiled for every catchment of NHDPlus for the...
Jaagus, Jaak; Briede, Agrita; Rimkus, Egidijus; Remm, Kalle
2014-10-01
Spatial distribution and trends in mean and absolute maximum and minimum temperatures and in the diurnal temperature range were analysed at 47 stations in the eastern Baltic region (Lithuania, Latvia and Estonia) during 1951-2010. The dependence of the studied variables on geographical factors (latitude, the Baltic Sea, land elevation) is discussed. Statistically significant increasing trends in maximum and minimum temperatures were detected for March, April, July, August and the annual values. At the majority of stations, an increase was also detected in February and May in the case of maximum temperature and in January and May in the case of minimum temperature. Warming was slightly stronger in the northern part of the study area, i.e. in Estonia. Trends in the diurnal temperature range differ seasonally. The strongest increasing trend was revealed in April and, at some stations, also in May, July and August. Negative and mostly insignificant changes occurred in January, February, March and June. The annual temperature range has not changed.
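The abstract does not name the trend test used; as an illustration only, the following sketch fits a least-squares linear trend to a fabricated annual maximum-temperature series and reports the slope with its two-sided p-value.

```python
import numpy as np
from scipy import stats

years = np.arange(1951, 2011)
# Fabricated series: a 0.2 degC/decade trend plus noise.
tmax = 8.0 + 0.02 * (years - 1951) + np.random.default_rng(1).normal(0, 0.6, years.size)

res = stats.linregress(years, tmax)
print(f"trend = {res.slope * 10:.2f} degC/decade, p = {res.pvalue:.3f}")
```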
Kraehenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Juergen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)
2013-10-15
The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but weaker performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly across
Coplen, T.B.; Hopple, J.A.; Böhlke, J.K.; Peiser, H.S.; Rieder, S.E.; Krouse, H.R.; Rosman, K.J.R.; Ding, T.; Vocke, R.D.; Revesz, K.M.; Lamberty, A.; Taylor, P.; De Bievre, P.
2002-01-01
laboratories comparable. The minimum and maximum concentrations of a selected isotope in naturally occurring terrestrial materials for selected chemical elements reviewed in this report are given below:

Isotope   Minimum mole fraction   Maximum mole fraction
2H        0.000 0255              0.000 1838
7Li       0.9227                  0.9278
11B       0.7961                  0.8107
13C       0.009 629               0.011 466
15N       0.003 462               0.004 210
18O       0.001 875               0.002 218
26Mg      0.1099                  0.1103
30Si      0.030 816               0.031 023
34S       0.0398                  0.0473
37Cl      0.240 77                0.243 56
44Ca      0.020 82                0.020 92
53Cr      0.095 01                0.095 53
56Fe      0.917 42                0.917 60
65Cu      0.3066                  0.3102
205Tl     0.704 72                0.705 06

The numerical values above have uncertainties that depend upon the uncertainties of the determinations of the absolute isotope-abundance variations of reference materials of the elements. Because reference materials used for absolute isotope-abundance measurements have not been included in relative isotope abundance investigations of zinc, selenium, molybdenum, palladium, and tellurium, ranges in isotopic composition are not listed for these elements, although such ranges may be measurable with state-of-the-art mass spectrometry. This report is available at the url: http://pubs.water.usgs.gov/wri014222.
Er, Hale Çolakoğlu; Erden, Ayşe; Küçük, N Özlem; Geçim, Ethem
2014-01-01
The aim of this study was to retrospectively assess the correlation between minimum apparent diffusion coefficient (ADCmin) values obtained from diffusion-weighted magnetic resonance imaging (MRI) and maximum standardized uptake values (SUVmax) obtained from positron emission tomography-computed tomography (PET-CT) in rectal cancer. Forty-one patients with pathologically confirmed rectal adenocarcinoma were included in this study. For preoperative staging, PET-CT and pelvic MRI with diffusion-weighted imaging were performed within one week (mean time interval, 3±1 day). For ADC measurements, the region of interest (ROI) was manually drawn along the border of each hyperintense tumor on b=1000 s/mm2 images. After repeating this procedure on each consecutive tumor-containing slice to cover the entire tumoral area, ROIs were copied to ADC maps. ADCmin was determined as the lowest ADC value among all ROIs in each tumor. For SUVmax measurements, whole-body images were assessed visually on transaxial, sagittal, and coronal images. ROIs were determined from the lesions observed on each slice, and SUVmax values were calculated automatically. The mean values of ADCmin and SUVmax were compared using Spearman's test. The mean ADCmin was 0.62±0.19×10-3 mm2/s (range, 0.368-1.227×10-3 mm2/s), the mean SUVmax was 20.07±9.3 (range, 4.3-49.5). A significant negative correlation was found between ADCmin and SUVmax (r=-0.347; P = 0.026). There was a significant negative correlation between the ADCmin and SUVmax values in rectal adenocarcinomas.
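For illustration, the correlation analysis described above reduces to a one-line call once ADCmin and SUVmax have been tabulated per patient; the values below are hypothetical, not the study's data.

```python
from scipy import stats

# Hypothetical per-patient values (ADCmin in 1e-3 mm2/s; SUVmax unitless).
adc_min = [0.41, 0.55, 0.62, 0.70, 0.88, 1.10]
suv_max = [32.0, 24.5, 21.0, 18.2, 12.3, 6.1]

rho, p = stats.spearmanr(adc_min, suv_max)
print(rho, p)  # a negative rho mirrors the reported r = -0.347
```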
Liu, Tong; Hu, Liang; Ma, Chao; Wang, Zhi-Yan; Chen, Hui-Ling
2015-04-01
In this paper, a novel hybrid method, which integrates an effective maximum relevance minimum redundancy (MRMR) filter and a fast extreme learning machine (ELM) classifier, is introduced for diagnosing erythemato-squamous (ES) diseases. In the proposed method, MRMR is employed as a feature selection tool for dimensionality reduction in order to further improve the diagnostic accuracy of the ELM classifier. The impact of the type of activation function, the number of hidden neurons and the size of the feature subsets on the performance of ELM has been investigated in detail. The effectiveness of the proposed method has been rigorously evaluated, in terms of classification accuracy, on the ES disease dataset, a benchmark dataset from the UCI machine learning repository. Experimental results demonstrate that our method achieved the best classification accuracy of 98.89% and an average accuracy of 98.55% via the 10-fold cross-validation technique. The proposed method might serve as a new candidate among powerful methods for diagnosing ES diseases.
Oberer, R.B.
2000-12-07
In an instrumented Cf-252 neutron source, it is desirable to distinguish fission events which produce neutrons from alpha decay events. A comparison of the maximum amplitude of a pulse from an alpha decay with the minimum amplitude of a fission pulse shows that the hemispherical configuration of the ion chamber is superior to the parallel-plate ion chamber.
2010-04-01
... homebuyer payment can a recipient charge a low-income rental tenant or homebuyer residing in housing units... Activities § 1000.124 What maximum and minimum rent or homebuyer payment can a recipient charge a low-income... charge a low-income rental tenant or homebuyer rent or homebuyer payments not to exceed 30 percent of...
Sa-Correia, I.; Van Uden, N.
1983-06-01
Difficulties experienced by brewers with yeast performance in the brewing of lager at low temperatures have led the authors to study the effect of ethanol on the minimum temperature for growth (Tmin). It was found that both the maximum temperature for growth (Tmax) and Tmin were adversely affected by ethanol and that ethanol tolerance was greatest at intermediate temperatures. (8 refs.)
Sung Woo Park; Byung Kwan Oh; Hyo Seon Park
2015-01-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this...
Lei, Ying; Liu, Yu; Sun, Bo; Sun, Changfeng
2016-10-01
In this study, spruce tree rings from the southern slope of the mid-Qinling Mountains were used to investigate the characteristics of average minimum temperatures during the past 138 years. Analysis showed that the interannual variability in the radial growth of trees was positively correlated with the interannual variability of average minimum temperatures from the previous December to the current September (VTM DS) in the study area during 1955-2010 AD. Based on the correlation analysis, the VTM DS were reconstructed for 1876-2013 AD with an explained variance of 42.5% for the calibration period. Among the 22 dramatic changing years, extreme changes occurred more often during cooling, while warming was comparatively gentle. Both the 10-year filtering of the VTM DS series and the frequency of occurrence of those dramatic changing years showed a relatively stationary variation after the early 1950s. Over the last five decades, the accumulated VTM DS series showed an obvious warming trend, and the increase of the minimum temperature contributed to the regional warming. The comparison of VTM DS and the dryness/wetness indices generally reflected cold-wet and warm-dry climate conditions in the study area. Significant positive correlations between the reconstructed VTM DS and the gridded minimum temperature indicated the regional representativeness of the temperature reconstruction, and positive correlations between VTM DS and the sea surface temperature (SST) of the Indian Ocean and western Pacific regions suggested a possible linkage between the VTM DS variations and the Asian summer monsoon. Synchronous fluctuations in three tree-ring series and connections of VTM DS with Arctic Oscillation (AO) and El Niño-Southern Oscillation (ENSO) activities suggested that the minimum temperature variations in the TTH area responded sensitively to large-scale climate fluctuations and were the result of atmosphere-ocean interactions.
G. M. J. HASAN
2014-10-01
Climate, one of the major controlling factors for the well-being of the inhabitants of the world, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat due to climate change caused by excessive use or abuse of ecology and natural resources. This study examines the rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data during the period 1957-2006. It has been observed that a good correlation exists between the monthly mean and daily maximum rainfall. A linear regression analysis of the data is found to be significant for all the months. Some key statistical parameters, namely the mean values of the Coefficient of Variability (CV), Relative Variability (RV) and Percentage Inter-annual Variability (PIV), have been studied and found to be at variance. Monthly, yearly and seasonal variations of rainy days were also analysed to check for any significant changes.
Favret, Eduardo A; Fuentes, Néstor O; Molina, Ana M; Setten, Lorena M
2008-10-01
During the last few years, RIMAPS technique has been used to characterize the micro-relief of metallic surfaces and recently also applied to biological surfaces. RIMAPS is an image analysis technique which uses the rotation of an image and calculates its average power spectrum. Here, it is presented as a tool for describing the morphology of the trichodium net found in some grasses, which is developed on the epidermal cells of the lemma. Three different species of grasses (herbarium samples) are analyzed: Podagrostis aequivalvis (Trin.) Scribn. & Merr., Bromidium hygrometricum (Nees) Nees & Meyen and Bromidium ramboi (Parodi) Rúgolo. Simple schemes representing the real microstructure of the lemma are proposed and studied. RIMAPS spectra of both the schemes and the real microstructures are compared. These results allow inferring how similar the proposed geometrical schemes are to the real microstructures. Each geometrical pattern could be used as a reference for classifying other species. Finally, this kind of analysis is used to determine the morphology of the trichodium net of Agrostis breviculmis Hitchc. As the dried sample had shrunk and the microstructure was not clear, two kinds of morphology are proposed for the trichodium net of Agrostis L., one elliptical and the other rectilinear, the former being the most suitable.
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations, the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique was used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
Ngeow, Chow-Choong; Kanbur, Shashi M.; Bhardwaj, Anupam; Schrecengost, Zachariah; Singh, Harinder P.
2017-01-01
Investigation of period–color (PC) and amplitude–color (AC) relations at the maximum and minimum light can be used to probe the interaction of the hydrogen ionization front (HIF) with the photosphere and the radiation hydrodynamics of the outer envelopes of Cepheids and RR Lyraes. For example, theoretical calculations indicated that such interactions would occur at minimum light for RR Lyrae and result in a flatter PC relation. In the past, the PC and AC relations have been investigated by using either the (V ‑ R)MACHO or (V ‑ I) colors. In this work, we extend previous work to other bands by analyzing the RR Lyraes in the Sloan Digital Sky Survey Stripe 82 Region. Multi-epoch data are available for RR Lyraes located within the footprint of the Stripe 82 Region in five (ugriz) bands. We present the PC and AC relations at maximum and minimum light in four colors: (u ‑ g)0, (g ‑ r)0, (r ‑ i)0, and (i ‑ z)0, after they are corrected for extinction. We found that the PC and AC relations for this sample of RR Lyraes show a complex nature in the form of flat, linear or quadratic relations. Furthermore, the PC relations at minimum light for fundamental mode RR Lyrae stars are separated according to the Oosterhoff type, especially in the (g ‑ r)0 and (r ‑ i)0 colors. If only considering the results from linear regressions, our results are quantitatively consistent with the theory of HIF-photosphere interaction for both fundamental and first overtone RR Lyraes.
V R Durai; Rashmi Bhardwaj
2014-07-01
The output from Global Forecasting System (GFS) T574L64 operational at India Meteorological Department (IMD), New Delhi is used for obtaining location specific quantitative forecast of maximum and minimum temperatures over India in the medium range time scale. In this study, a statistical bias correction algorithm has been introduced to reduce the systematic bias in the 24–120 hour GFS model location specific forecast of maximum and minimum temperatures for 98 selected synoptic stations, representing different geographical regions of India. The statistical bias correction algorithm used for minimizing the bias of the next forecast is Decaying Weighted Mean (DWM), as it is suitable for small samples. The main objective of this study is to evaluate the skill of Direct Model Output (DMO) and Bias Corrected (BC) GFS for location specific forecast of maximum and minimum temperatures over India. The performance skill of 24–120 hour DMO and BC forecast of GFS model is evaluated for all the 98 synoptic stations during summer (May–August 2012) and winter (November 2012–February 2013) seasons using different statistical evaluation skill measures. The magnitude of Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for BC GFS forecast is lower than DMO during both summer and winter seasons. The BC GFS forecasts have higher skill score as compared to GFS DMO over most of the stations in all day-1 to day-5 forecasts during both summer and winter seasons. It is concluded from the study that the skill of GFS statistical BC forecast improves over the GFS DMO remarkably and hence can be used as an operational weather forecasting system for location specific forecast over India.
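The exact weighting scheme of the Decaying Weighted Mean is not given in the abstract; the sketch below shows one common formulation of a decaying-average running bias estimate, with the weight w and the temperature values chosen purely for illustration.

```python
def update_bias(prev_bias, forecast, observation, w=0.1):
    # Exponentially decaying running bias: small w = long memory,
    # large w = fast adaptation to recent forecast errors.
    return (1.0 - w) * prev_bias + w * (forecast - observation)

bias = 0.0
for fct, obs in [(34.1, 32.6), (33.8, 32.9), (35.0, 33.2)]:  # hypothetical Tmax pairs (degC)
    bias = update_bias(bias, fct, obs)

next_forecast = 34.6
print(next_forecast - bias)  # bias-corrected forecast
```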
Saveljev, Vladimir; Kim, Sung-Kyu; Lee, Hyoung; Kim, Hyun-Woo; Lee, Byoungho
2016-02-08
The amplitude of the moiré patterns is estimated in relation to the opening ratio in line gratings and square grids. The theory is developed; the experimental measurements are performed. The minimum and the maximum of the amplitude are found. There is a good agreement between the theoretical and experimental data. This is additionally confirmed by the visual observation. The results can be applied to the image quality improvement in autostereoscopic 3D displays, to the measurements, and to the moiré displays.
Casati, Michele
2014-05-01
The global communication network and GPS satellites have enabled us to monitor, for more than a decade, some of the more sensitive, well-known and highly urbanized volcanic areas around the world. The possibility of electromagnetic coupling between Earth-Sun dynamics and major geophysical events is a topic of research. However, the majority of researchers orient their research in one direction: they attempt to demonstrate a significant EM coupling between solar dynamics and terrestrial seismicity, ignoring a possible relationship between solar dynamics and the dynamics inherent in volcanic calderas. The scientific references are scarce; however, a study conducted by the Vesuvius Observatory of Naples notes that the seismic activity on the volcano is closely related to changes in solar activity and the Earth's magnetic field. We decided to extend the study to many other volcanic calderas in the world in order to generalise the relationship between solar activity and caldera activity and/or ground deformation. The Northern Hemisphere volcanoes examined are as follows: Long Valley, Yellowstone, Three Sisters, Kilauea (Hawaii), Axial Seamount (United States); Augustine (Alaska); Sakurajima (Japan); Hammarinn, Krisuvik and Askja (Iceland); and Campi Flegrei (Italy). We note that the deformation of volcanoes recorded in GPS logs varies with long, slow geodynamic processes related to two well-known time periods within the eleven-year cycle of solar magnetic activity: the solar minimum and maximum. We find that the years of minimum (maximum) coincide with the years in which a transition to a phase of deflation (inflation) occurs. Additionally, the seismicity recorded in such areas reaches its peak in the years of solar minimum or maximum. However, the total number and magnitude of seismic events is greater during deep solar minima than maxima, as evidenced by increased seismic activity occurring between 2006 and 2010. This
MAXIMUM DISCLOSURE WITH MINIMUM DELAY
J Van R. du Preez
2012-02-01
In his treatment of the subject 'Die SA Weermag moet ook sy ander wapens effektief aanwend' in the 7/1 issue of Militaria, Colonel W. Otto regards it as incumbent on the South African Defence Force to make effective use of propaganda (in my book, the corruption of the channels of communication).
Albuquerque, Fabio; Beier, Paul
2015-01-01
Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest number of sites (minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
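The abstract notes that RWR "can be implemented easily and quickly in R or a spreadsheet"; a minimal sketch of the computation, on a fabricated site-by-species presence matrix, is given below (weighting each species by the inverse of its range size is the usual RWR definition, assumed here).

```python
import numpy as np

# presence[i, j] = 1 if species j occurs in site i (fabricated matrix).
presence = np.array([[1, 0, 1, 0],
                     [1, 1, 0, 0],
                     [0, 1, 0, 1],
                     [1, 0, 0, 0]])

range_size = presence.sum(axis=0)          # number of sites each species occupies
rwr = (presence / range_size).sum(axis=1)  # rarity-weighted richness per site
print(np.argsort(-rwr))                    # sites in priority (descending RWR) order
```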
Al-Quwaiee, Hessa
2016-01-07
In this work, we derive the exact statistical characteristics of the maximum and the minimum of two modified double generalized gamma variates in closed form in terms of Meijer's G-function, Fox's H-function, and the extended generalized bivariate Meijer's G-function and H-function, in addition to simple closed-form asymptotic results in terms of elementary functions. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity scheme and (ii) a dual-hop free-space optical relay transmission system over double generalized gamma fading channels with the impact of pointing errors. In addition, we provide asymptotic results for the bit error rate of the two systems in the high-SNR regime. Computer-based Monte-Carlo simulations verify our new analytical results.
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on them are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods, one is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (Mean Square Error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
spherical variogram over the conterminous land of Spain, and converted onto a regular 10 km2 grid (resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain, the distance at which pairs of stations have a common variance in temperature (both maximum, Tmax, and minimum, Tmin) above the selected threshold (50%, Pearson r ~0.70) does not on average exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along the coastal areas and lower variability inland. The highest spatial variability coincides particularly with coastal areas surrounded by mountain chains, suggesting that the orography is one of the main driving factors causing higher interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than daytime temperature. The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than that of Tmax, and that a higher network density would be necessary to capture the higher spatial variability highlighted for Tmin with respect to Tmax. A conservative distance for reference series could be evaluated as 200 km, which we propose for the continental land of Spain and use in the development of MOTEDAS.
Leonardo W. T. Silva
2014-08-01
In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals, producing a complex optimization problem. In this case, it is possible to calculate solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator differs from the conventional approach because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20%, and reduced premature convergence.
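The GA-MMC operator itself is described only qualitatively above; the sketch below keeps just the stated best-with-worst pairing idea, combined with a generic uniform crossover. The population encoding, the pairing details, and all parameters are assumptions for illustration, not the published operator.

```python
import numpy as np

rng = np.random.default_rng(42)

def maxmin_crossover(pop, fitness):
    # Pair the fittest individuals with the least fit ones (best-with-worst)
    # and recombine each pair with a uniform crossover mask.
    order = np.argsort(fitness)            # ascending: worst ... best
    half = len(order) // 2
    children = []
    for lo, hi in zip(order[:half], order[::-1][:half]):
        mask = rng.random(pop.shape[1]) < 0.5
        children.append(np.where(mask, pop[hi], pop[lo]))
    return np.array(children)

pop = rng.random((6, 8))                   # 6 individuals, 8 genes each
print(maxmin_crossover(pop, pop.sum(axis=1)).shape)  # -> (3, 8)
```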
Sabitha Gauni
2014-03-01
In the field of wireless communication, there is always a demand for reliability, improved range and speed. Many wireless networks such as OFDM, CDMA2000 and WCDMA provide a solution to this problem when incorporated with multiple-input multiple-output (MIMO) technology. Due to the complexity of the signal processing involved, MIMO is highly expensive in terms of area consumption. In this paper, a MIMO receiver design method is proposed to reduce the area consumed by the processing elements involved in complex signal processing. A solution for area reduction in the MIMO maximum likelihood (ML) receiver using sorted QR decomposition (SQRD) and a unitary transformation method is analyzed. It provides a unified approach, reduces ISI, and gives better performance at low cost. The receiver pre-processor architecture based on the Minimum Mean Square Error (MMSE) criterion is compared when using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations are transformations of matrices which maintain the Hermitian nature of the matrix and the multiplication and addition relationships between the operators. This helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bounded and the algorithm is well suited to fixed-point arithmetic.
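For context, the baseline linear MMSE filter that such pre-processor architectures compute is sketched below in floating point; the paper's sorted-QR and unitary-transformation steps are hardware refinements of this computation and are not reproduced here.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    # Linear MMSE estimate x_hat = (H^H H + sigma^2 I)^{-1} H^H y  for y = H x + n.
    n_tx = H.shape[1]
    G = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx), H.conj().T)
    return G @ y

rng = np.random.default_rng(7)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=4)        # BPSK symbols
y = H @ x + 0.1 * rng.normal(size=4)
print(np.sign(mmse_detect(H, y, 0.01).real))
```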
Jeng, K-S; Huang, C-C; Lin, C-K; Lin, C-C; Chen, K-H
2013-06-01
Early detection of Budd-Chiari syndrome (BCS) is crucial for giving the appropriate therapy in time. Angiography remains the gold standard for diagnosing BCS. However, establishing the diagnosis of BCS in complicated cirrhotic patients remains a challenge. We used maximum intensity projection (Max IP) and minimum intensity projection (Min IP) reconstructions of computed tomographic (CT) images to detect this syndrome in such a patient. A 55-year-old man with a history of chronic hepatitis B infection and alcoholism had previously undergone a left lateral segmentectomy for hepatic epithelioid angiomyolipoma (4.6 × 3.5 × 3.3 cm) with a concomitant splenectomy. Liver decompensation with intractable ascites and jaundice occurred 4 months later. The reformatted images of the venous phase of enhanced CT with Max IP and Min IP showed middle hepatic vein thrombosis. He then underwent living-related donor liver transplantation with a right liver graft from his daughter. Intraoperatively, we noted thrombosis of the middle hepatic vein protruding into the inferior vena cava. The postoperative course was uneventful. Microscopic findings revealed micronodular cirrhosis with mixed inflammation in the portal areas. Some liver lobules exhibited congestion and sinusoidal dilation compatible with venous occlusion clinically. We recommend Max IP and Min IP of CT images as simple and effective techniques to establish the diagnosis of BCS, especially in complicated cirrhotic patients, thereby avoiding invasive interventional procedures. Copyright © 2013 Elsevier Inc. All rights reserved.
Network float model of network planning and its maximum and minimum
刘琳; 李俊; 吴轶群
2011-01-01
The network float of a network plan denotes the sum of the floats that the activities can actually consume, which is not simply the theoretical sum of their individual floats. Under a fixed total project duration, it determines the sum of the maximum durations that all activities can actually reach, and it is therefore closely related to the cost of an engineering project. The network float is a variable that depends on the time parameters of the activities, which means its value can be set by adjusting those time parameters, thereby enabling cost optimization; however, its range of variation had not previously been determined. In this paper, the meaning of the network float is first analyzed from a new angle. On that basis, a model for computing the network float is established, its range of variation (i.e., its maximum and minimum) is determined, and the activity time parameters that must be satisfied to attain the maximum and minimum of the network float are identified. Finally, a case study illustrates the application.
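To make the float terminology concrete, the sketch below runs a textbook CPM forward/backward pass on a four-activity toy network and sums the per-activity total floats. Note that this naive sum is exactly the "theoretical" quantity the paper argues the network float is not; the paper's model, which accounts for the interaction of floats, is not reproduced here.

```python
# Minimal CPM sketch: total float per activity and its naive sum.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

es, ef = {}, {}
for a in ["A", "B", "C", "D"]:                       # topological order
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]

project_end = max(ef.values())
succs = {a: [b for b in preds if a in preds[b]] for a in preds}
lf, ls = {}, {}
for a in ["D", "C", "B", "A"]:                       # reverse topological order
    lf[a] = min((ls[s] for s in succs[a]), default=project_end)
    ls[a] = lf[a] - durations[a]

total_float = {a: ls[a] - es[a] for a in durations}
print(total_float, sum(total_float.values()))        # B has float 2; A, C, D are critical
```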
Sürer Budak, Evrim; Toptaş, Tayfun; Aydın, Funda; Öner, Ali Ozan; Çevikol, Can; Şimşek, Tayup
2017-02-05
To explore the correlation of the primary tumor's maximum standardized uptake value (SUVmax) and minimum apparent diffusion coefficient (ADCmin) with clinicopathologic features, and to determine their predictive power in endometrial cancer (EC). A total of 45 patients who had undergone staging surgery after a preoperative evaluation with (18)F-fluorodeoxyglucose (FDG) positron emission tomography/computerized tomography (PET/CT) and diffusion-weighted magnetic resonance imaging (DW-MRI) were included in a prospective case-series study with planned data collection. Multiple linear regression analysis was used to determine the correlations between the study variables. The mean ADCmin and SUVmax values were determined as 0.72±0.22 and 16.54±8.73, respectively. A univariate analysis identified age, myometrial invasion (MI) and lymphovascular space involvement (LVSI) as the potential factors associated with ADCmin while it identified age, stage, tumor size, MI, LVSI and number of metastatic lymph nodes as the potential variables correlated to SUVmax. In multivariate analysis, on the other hand, MI was the only significant variable that correlated with ADCmin (p=0.007) and SUVmax (p=0.024). Deep MI was best predicted by an ADCmin cutoff value of ≤0.77 [93.7% sensitivity, 48.2% specificity, and 93.0% negative predictive value (NPV)] and SUVmax cutoff value of >20.5 (62.5% sensitivity, 86.2% specificity, and 81.0% NPV); however, the two diagnostic tests were not significantly different (p=0.266). Among clinicopathologic features, only MI was independently correlated with SUVmax and ADCmin. However, the routine use of (18)F-FDG PET/CT or DW-MRI cannot be recommended at the moment due to less than ideal predictive performances of both parameters.
Phan Thanh Noi
2016-12-01
This study aims to quantitatively evaluate the land surface temperature (LST) derived from the MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) together with auxiliary data, solving the discontinuity problem of ground measurements. No previous study of Vietnam has integrated both TERRA and AQUA LST, daytime and nighttime, for Ta estimation (i.e., using four MODIS LST datasets). In addition, to find out which variables are the most effective for describing the differences between LST and Ta, we tested several popular methods, such as the Pearson correlation coefficient, stepwise selection, the Bayesian information criterion (BIC), adjusted R-squared, and principal component analysis (PCA), on 14 variables (including four LST variables, NDVI, elevation, latitude, longitude, day length in hours, Julian day, and four view zenith angle variables), and then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground-truth temperature derived from 15 climate stations depend on time and regional topography. The best results for Ta-max and Ta-min estimation were achieved when we combined the daytime and nighttime LST of both TERRA and AQUA with data from the topography analysis.
Liu Jin
2012-01-01
Background: To evaluate the accuracy of the combined maximum and minimum intensity projection-based internal target volume (ITV) delineation in 4-dimensional (4D) CT scans for liver malignancies. Methods: 4D CT data with synchronized IV contrast were acquired from 15 liver cancer patients (4 hepatocellular carcinomas; 11 hepatic metastases). We used five approaches to determine ITVs: (1) ITVAllPhases: contouring the gross tumor volume (GTV) on each of the 10 respiratory phases of the 4D CT data set and combining these GTVs; (2) ITV2Phase: contouring the GTV on the CT of the peak inhale phase (0% phase) and the peak exhale phase (50% phase) and then combining the two; (3) ITVMIP: contouring the GTV on the MIP with modifications based on the physician's visual verification of contours in each respiratory phase; (4) ITVMinIP: contouring the GTV on the MinIP with modification by the physician; (5) ITV2M: combining ITVMIP and ITVMinIP. ITVAllPhases was taken as the reference ITV, and the metrics used for comparison were: matching index (MI) and under- and over-estimated volume (Vunder and Vover). Results: 4D CT images were successfully acquired from all 15 patients and tumor margins were clearly discernible in all patients. On CT images, 9 cases appeared as low density and 6 as mixed density. After comparison of the metrics, ITV2M was the most appropriate tool for contouring the ITV for liver malignancies, with the highest MI of 0.93 ± 0.04 and the lowest proportion of Vunder (0.07 ± 0.04). Moreover, tumor volume, three-dimensional target motion, and the ratio of tumor vertical diameter to tumor motion magnitude in the cranio-caudal direction did not significantly influence the values of MI and the proportion of Vunder. Conclusion: ITV2M is recommended as a reliable method for generating ITVs from 4D CT data sets in liver cancer.
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
HE Fang; ZHANG BeiChen; HUANG DeHong
2012-01-01
Based on the ionosphere observation data obtained by the EISCAT Svalbard Radar (ESR) in the solar minimum year 2007, we analyzed diurnal variations of the F2-peak electron density (NmF2) in four seasons under disturbed and quiet geomagnetic conditions. It was found that soft electron precipitation had an evident effect on the NmF2 increase at magnetic noon in spring, summer and autumn, and that electron precipitation effects were prominent in winter. The comparison between the IRI-2007 model and the observations showed that the IRI (International Reference Ionosphere) model predicts NmF2 better when photoionization is dominant during the polar day, but worse when electron precipitation is dominant during the polar night. We showed that the electrons in the lower energy band decreased as the geomagnetic disturbance grew, which resulted in lower NmF2. By analyzing the spectrum of precipitating electrons under different geomagnetic conditions, it was found that this phenomenon was induced by the energy flux enhancement of low-energy precipitating electrons.
Kremser, S.; Bodeker, G. E.; Lewis, J.
2014-01-01
A Climate Pattern-Scaling Model (CPSM) that simulates global patterns of climate change, for a prescribed emissions scenario, is described. A CPSM works by quantitatively establishing the statistical relationship between a climate variable at a specific location (e.g. daily maximum surface temperature, Tmax) and one or more predictor time series (e.g. global mean surface temperature, Tglobal) - referred to as the "training" of the CPSM. This training uses a regression model to derive fit coefficients that describe the statistical relationship between the predictor time series and the target climate variable time series. Once that relationship has been determined, and given the predictor time series for any greenhouse gas (GHG) emissions scenario, the change in the climate variable of interest can be reconstructed - referred to as the "application" of the CPSM. The advantage of using a CPSM rather than a typical atmosphere-ocean global climate model (AOGCM) is that the predictor time series required by the CPSM can usually be generated quickly using a simple climate model (SCM) for any prescribed GHG emissions scenario and then applied to generate global fields of the climate variable of interest. The training can be performed either on historical measurements or on output from an AOGCM. Using model output from 21st century simulations has the advantage that the climate change signal is more pronounced than in historical data and therefore a more robust statistical relationship is obtained. The disadvantage of using AOGCM output is that the CPSM training might be compromised by any AOGCM inadequacies. For the purposes of exploring the various methodological aspects of the CPSM approach, AOGCM output was used in this study to train the CPSM. These investigations of the CPSM methodology focus on monthly mean fields of daily temperature extremes (Tmax and Tmin). The methodological aspects of the CPSM explored in this study include (1) investigation of the advantage
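As an illustration of the training/application split described above, the sketch below fits a single-cell pattern-scaling regression on fabricated data and applies it to a new predictor series; a real CPSM would do this for every grid cell and month.

```python
import numpy as np

# "Training": regress a local target (e.g., monthly-mean Tmax anomaly at one
# grid cell) on a global predictor (Tglobal), both fabricated here.
t_global = np.linspace(0.0, 3.0, 100)
t_local = 1.4 * t_global + np.random.default_rng(2).normal(0, 0.3, 100)
slope, intercept = np.polyfit(t_global, t_local, 1)

# "Application": reconstruct the local change for a new scenario's predictor,
# e.g., a Tglobal series produced by a simple climate model.
t_global_new = np.array([0.5, 1.5, 2.5])
print(slope * t_global_new + intercept)
```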
Shen, Tengming [Fermilab; Ye, Liyang [NCSU, Raleigh; Turrioni, Daniele [Fermilab; Li, Pei [Fermilab
2015-01-01
Small insert coils have been built using a multifilamentary Bi2Sr2CaCu2Ox round wire, and characterized in background fields to explore the quench behaviors and limits of Bi2Sr2CaCu2Ox superconducting magnets, with an emphasis on assessing the impact of slow normal zone propagation on quench detection. Using heaters of various lengths to initiate a small normal zone, a coil was quenched safely more than 70 times without degradation, with the maximum coil temperature reaching 280 K. Coils withstood a resistive voltage of tens of mV for seconds without quenching, showing the high stability of these coils and suggesting that the quench detection voltage should be greater than 50 mV so as not to falsely trigger protection. The hot spot temperature at which the resistive voltage of the normal zone reaches 100 mV increases from ~40 K to ~80 K as the operating wire current density Jo increases from 89 A/mm2 to 354 A/mm2, whereas for the voltage to reach 1 V it increases from ~60 K to ~140 K, showing the increasingly negative impact of slow normal zone propagation on quench detection with increasing Jo and the need to limit the quench detection voltage to < 1 V. These measurements, coupled with an analytical quench model, were used to assess the impact of the maximum allowable voltage and temperature upon quench detection on quench protection, assuming the hot spot temperature is limited to < 300 K.
Periodicity Analysis of SMMF in Solar Maximum and Minimum
叶妮; 祝凤荣; 周雪梅; 贾焕玉
2012-01-01
Using the data observed by the Wilcox Solar Observatory from 1975 to 2010, the short-term periodicity of the solar mean magnetic field (SMMF) in solar maximum and minimum is analyzed. The results show that the SMMF has main periods of about 9 days, 13.5 days, and 27 days. During solar maximum, the SMMF has its most dominant period near 27 days. In solar minimum, however, the 13.5-day periodicity is most significant, except in 1984-1986. These results show that the distribution of solar active regions in solar maximum is quite different from that in solar minimum.
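The paper's spectral method is not specified in the abstract; as a generic illustration, the sketch below recovers a dominant ~27-day period from a fabricated, evenly sampled daily series with a plain periodogram.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(2000)  # days
series = (np.sin(2 * np.pi * t / 27.0)
          + 0.6 * np.sin(2 * np.pi * t / 13.5)
          + rng.normal(0, 0.5, t.size))

power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0)       # cycles per day
peak = freqs[np.argmax(power[1:]) + 1]       # skip the zero-frequency bin
print(1.0 / peak)                            # dominant period in days (~27)
```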
Beloglazov, M. I.; Akhmetov, O. I.
2010-12-01
On the basis of the two-component measurements of the atmospheric noise electromagnetic field on the Kola Peninsula, a change in the first Schumann resonance (SR-1) as an indicator of global lightning formation is studied depending on the level of galactic cosmic rays (GCRs). It is found that the effect of GCRs is most evident during five months: in January and from September to December; in this case the SR-1 intensity in 2001 was higher than the level of 2007 by a factor of 1.5 and more. This effect almost disappears when the regime of the Northern Hemisphere changes into the summer regime. It is assumed that an increase in the GCR intensity results in an increase in the lightning occurrence frequency; however, the probability that the power of each lightning stroke decreases owing to an early disruption of the charge separation and accumulation processes in a thundercloud increases; on the contrary, a decrease in the GCR intensity decreases lightning stroke occurrence frequency and simultaneously increases the probability of accumulating a higher energy by a thundercloud and increasing the lightning power to the maximum possible values.
Ali Arkamose Assani
2016-10-01
Various manmade features (diversions, dredging, regulation, etc.) have affected water levels in the Great Lakes and their outlets since the 19th century. The goal of this study is to analyze the impacts of such features on the stationarity of, and the dependence between, monthly mean maximum and minimum water levels in the Great Lakes and the St. Lawrence River from 1919 to 2012. As far as stationarity is concerned, the Lombard method brought out shifts in the mean and variance of monthly mean water levels in Lake Ontario and the St. Lawrence River related to the regulation of these waterbodies in the wake of the digging of the St. Lawrence Seaway in the mid-1950s. Water level shifts in the other lakes are linked to climate variability. As for the dependence between water levels, the copula method revealed a change in dependence, mainly between Lakes Erie and Ontario, following the regulation of monthly mean maximum and minimum water levels in the latter. The impacts of manmade features primarily affected the temporal variability of monthly mean water levels in Lake Ontario.
Ohnaka, K.; Weigelt, G.; Hofmann, K.-H.
2017-01-01
Aims: Our recent visible polarimetric images of the well-studied AGB star W Hya taken at pre-maximum light (phase 0.92) with VLT/SPHERE-ZIMPOL revealed clumpy dust clouds close to the star, at ~2 R⋆. We present second-epoch SPHERE-ZIMPOL observations of W Hya at minimum light (phase 0.54), as well as high-spectral-resolution long-baseline interferometric observations with the AMBER instrument at the Very Large Telescope Interferometer (VLTI). Methods: We observed W Hya with VLT/SPHERE-ZIMPOL at three wavelengths in the continuum (645, 748, and 820 nm), in the Hα line at 656.3 nm, and in the TiO band at 717 nm. The VLTI/AMBER observations were carried out in the wavelength region of the CO first overtone lines near 2.3 μm with a spectral resolution of 12 000. Results: The high-spatial-resolution polarimetric images obtained with SPHERE-ZIMPOL have allowed us to detect clear time variations in the clumpy dust clouds as close as 34-50 mas (1.4-2.0 R⋆) to the star. We detected the formation of a new dust cloud as well as the disappearance of one of the dust clouds detected at the first epoch. The Hα and TiO emission extends to 150 mas (~6 R⋆), and the Hα images obtained at the two epochs reveal time variations. The degree of linear polarization measured at minimum light, which ranges from 13 to 18%, is higher than that observed at pre-maximum light. The power-law-type limb-darkened disk fit to the AMBER data in the continuum results in a limb-darkened disk diameter of 49.1 ± 1.5 mas and a limb-darkening parameter of 1.16 ± 0.49, indicating that the atmosphere is more extended, with weaker limb-darkening, compared to pre-maximum light. Our Monte Carlo radiative transfer modeling shows that the second-epoch SPHERE-ZIMPOL data can be explained by a shell of 0.1 μm grains of Al2O3, Mg2SiO4, and MgSiO3 with a 550 nm optical depth of 0.6 ± 0.2 and inner and outer radii of 1.3 R⋆ and 10 ± 2 R⋆, respectively. Our modeling suggests the predominance of small (0
高艳普; 王向东; 王冬青
2015-01-01
An algorithm of the maximum likelihood method for parameter estimation is presented for multivariable controlled autoregressive moving average (CARMA-like) systems. The algorithm transforms the CARMA-like system into m identification models (where m is the number of outputs), each of which contains only one parameter vector to be estimated; the parameter vectors of the identification models are then estimated by the maximum likelihood method, yielding the parameter estimates of the whole system. Simulation results verify the effectiveness of the proposed algorithm.
Mary Hokazono
CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sß0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sß+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. The middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. The time-averaged maximum mean velocity differed significantly between the genotypes and correlated with hematological characteristics.
Svendsen, Jon C; Tirsgaard, Bjørn; Cordero, Gerardo A; Steffensen, John F
2015-01-01
Intraspecific variation and trade-off in aerobic and anaerobic traits remain poorly understood in aquatic locomotion. Using gilthead sea bream (Sparus aurata) and Trinidadian guppy (Poecilia reticulata), both axial swimmers, this study tested four hypotheses: (1) gait transition from steady to unsteady (i.e., burst-assisted) swimming is associated with anaerobic metabolism evidenced as excess post-exercise oxygen consumption (EPOC); (2) variation in swimming performance (critical swimming speed; Ucrit) correlates with metabolic scope (MS) or anaerobic capacity (i.e., maximum EPOC); (3) there is a trade-off between maximum sustained swimming speed (Usus) and minimum cost of transport (COTmin); and (4) variation in Usus correlates positively with optimum swimming speed (Uopt; i.e., the speed that minimizes energy expenditure per unit of distance traveled). Data collection involved swimming respirometry and video analysis. Results showed that anaerobic swimming costs (i.e., EPOC) increase linearly with the number of bursts in S. aurata, with each burst corresponding to 0.53 mg O2 kg-1. Data are consistent with a previous study on striped surfperch (Embiotoca lateralis), a labriform swimmer, suggesting that the metabolic cost of burst swimming is similar across various types of locomotion. There was no correlation between Ucrit and MS or anaerobic capacity in S. aurata, indicating that other factors, including morphological or biomechanical traits, influenced Ucrit. We found no evidence of a trade-off between Usus and COTmin. In fact, the data revealed significant negative correlations between Usus and COTmin, suggesting that individuals with high Usus also exhibit low COTmin. Finally, there were positive correlations between Usus and Uopt. Our study demonstrates the energetic importance of anaerobic metabolism during unsteady swimming, and provides intraspecific evidence that superior maximum sustained swimming speed is associated with superior swimming economy and optimum swimming speed.
Jon Christian Svendsen
2015-02-01
Full Text Available Intraspecific variation and trade-off in aerobic and anaerobic traits remain poorly understood in aquatic locomotion. Using gilthead sea bream (Sparus aurata and Trinidadian guppy (Poecilia reticulata, both axial swimmers, this study tested four hypotheses: 1 gait transition from steady to unsteady (i.e. burst-assisted swimming is associated with anaerobic metabolism evidenced as excess post exercise oxygen consumption (EPOC; 2 variation in swimming performance (critical swimming speed; Ucrit correlates with metabolic scope (MS or anaerobic capacity (i.e. maximum EPOC; 3 there is a trade-off between maximum sustained swimming speed (Usus and minimum cost of transport (COTmin; and 4 variation in Usus correlates positively with optimum swimming speed (Uopt; i.e. the speed that minimizes energy expenditure per unit of distance travelled. Data collection involved swimming respirometry and video analysis. Results showed that anaerobic swimming costs (i.e. EPOC increase linearly with the number of bursts in S. aurata, with each burst corresponding to 0.53 mg O2 kg-1. Data are consistent with a previous study on striped surfperch (Embiotoca lateralis, a labriform swimmer, suggesting that the metabolic cost of burst swimming is similar across various types of locomotion. There was no correlation between Ucrit and MS or anaerobic capacity in S. aurata indicating that other factors, including morphological or biomechanical traits, influenced Ucrit. We found no evidence of a trade-off between Usus and COTmin. In fact, data revealed significant negative correlations between Usus and COTmin, suggesting that individuals with high Usus also exhibit low COTmin. Finally, there were positive correlations between Usus and Uopt. Our study demonstrates the energetic importance of anaerobic metabolism during unsteady swimming, and provides intraspecific evidence that superior maximum sustained swimming speed is associated with superior swimming economy and optimum
Svendsen, Jon C.; Tirsgaard, Bjørn; Cordero, Gerardo A.; Steffensen, John F.
2015-01-01
Intraspecific variation and trade-off in aerobic and anaerobic traits remain poorly understood in aquatic locomotion. Using gilthead sea bream (Sparus aurata) and Trinidadian guppy (Poecilia reticulata), both axial swimmers, this study tested four hypotheses: (1) gait transition from steady to unsteady (i.e., burst-assisted) swimming is associated with anaerobic metabolism evidenced as excess post-exercise oxygen consumption (EPOC); (2) variation in swimming performance (critical swimming speed; Ucrit) correlates with metabolic scope (MS) or anaerobic capacity (i.e., maximum EPOC); (3) there is a trade-off between maximum sustained swimming speed (Usus) and minimum cost of transport (COTmin); and (4) variation in Usus correlates positively with optimum swimming speed (Uopt; i.e., the speed that minimizes energy expenditure per unit of distance traveled). Data collection involved swimming respirometry and video analysis. Results showed that anaerobic swimming costs (i.e., EPOC) increase linearly with the number of bursts in S. aurata, with each burst corresponding to 0.53 mg O2 kg−1. Data are consistent with a previous study on striped surfperch (Embiotoca lateralis), a labriform swimmer, suggesting that the metabolic cost of burst swimming is similar across various types of locomotion. There was no correlation between Ucrit and MS or anaerobic capacity in S. aurata, indicating that other factors, including morphological or biomechanical traits, influenced Ucrit. We found no evidence of a trade-off between Usus and COTmin. In fact, data revealed significant negative correlations between Usus and COTmin, suggesting that individuals with high Usus also exhibit low COTmin. Finally, there were positive correlations between Usus and Uopt. Our study demonstrates the energetic importance of anaerobic metabolism during unsteady swimming, and provides intraspecific evidence that superior maximum sustained swimming speed is associated with superior swimming economy and optimum swimming speed.
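Where burst counts come from video and EPOC from respirometry, the per-burst anaerobic cost reported above can be estimated as the slope of a linear fit; a minimal sketch (the data values here are illustrative, not the study's measurements):

    import numpy as np

    # Hypothetical (burst count, EPOC) pairs for one individual; EPOC in mg O2 kg^-1
    bursts = np.array([0, 5, 10, 20, 40])
    epoc = np.array([0.2, 2.9, 5.4, 10.8, 21.3])

    slope, intercept = np.polyfit(bursts, epoc, 1)
    print(f"anaerobic cost per burst ~ {slope:.2f} mg O2 kg^-1")  # ~0.53 in S. aurata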
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
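A hedged sketch of the entropy lower bound mentioned above: for a diagnostic problem with outcome probabilities p over a k-valued information system, the minimum average depth is at least H(p)/log2 k (function name and example values are my own):

    import math

    def entropy_lower_bound(probabilities, k=2):
        # H(p) / log2(k): lower bound on the average depth of any decision tree
        h = -sum(p * math.log2(p) for p in probabilities if p > 0)
        return h / math.log2(k)

    # Four equally likely diagnoses, binary attributes (k = 2):
    print(entropy_lower_bound([0.25] * 4))  # 2.0 -> at least 2 tests on average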
Ohnaka, Keiichi; Hofmann, Karl-Heinz
2016-01-01
Our recent visible polarimetric images of the well-studied AGB star W Hya taken at pre-maximum light (phase 0.92) with VLT/SPHERE-ZIMPOL have revealed clumpy dust clouds close to the star at ~2 Rstar. We present second-epoch SPHERE-ZIMPOL observations of W Hya at minimum light (phase 0.54) in the continuum (645, 748, and 820 nm), in the Halpha line (656.3 nm), and in the TiO band (717 nm), as well as high-spectral-resolution long-baseline interferometric observations in 2.3 micron CO lines with the AMBER instrument at the Very Large Telescope Interferometer (VLTI). The high-spatial-resolution polarimetric images have allowed us to detect clear time variations in the clumpy dust clouds as close as 34-50 mas (1.4-2.0 Rstar) to the star. We detected the formation of a new dust cloud and the disappearance of one of the dust clouds detected at the first epoch. The Halpha and TiO emission extends to ~150 mas (~6 Rstar), and the Halpha images reveal time variations. The degree of linear polarization is higher at mi...
Rozanov, E. V.; Schlesinger, M. E.; Egorova, T. A.; Li, B.; Andronova, N.; Zubov, V. A.
2004-01-01
The University of Illinois at Urbana-Champaign general circulation model with interactive photochemistry has been applied to estimate the changes in ozone, temperature and dynamics caused by the observed enhancement of solar ultraviolet radiation during the 11-year solar activity cycle. Two 15-year-long runs with spectral solar UV fluxes for the minimum and maximum solar activity cases have been performed. It was found that, due to the imposed changes in spectral solar UV fluxes, the annual-mean ozone mixing ratio increases by 3% over the southern middle latitudes in the upper stratosphere and by 2% in the northern lower stratosphere. The model also shows a statistically significant warming of 1.2 K in the stratosphere and an acceleration of the polar-night jets in both hemispheres. The most pronounced changes were found in November and March over the Northern Hemisphere and in September-October over the Southern Hemisphere. The magnitude and seasonal behavior of the simulated changes resemble the most robust features of the solar signal obtained from observational data analysis; however, they do not exactly coincide. The simulated zonal wind and temperature response during late fall to early spring contains the observed downward and poleward propagation of the solar signal; however, its structure and phase are different from those observed. The response of the surface air temperature in December consists of warming over northern Europe, the USA, and eastern Russia, and cooling over Greenland, Alaska, and central Asia. This pattern resembles the changes of the surface winter temperature after a major volcanic eruption. Model results for September-October show an intensification of ozone loss by up to 10% and expansion of the "ozone hole" toward South America.
雷苏娇; 李俊; 吴海博; 冯宗明
2014-01-01
Multipath routing is a new feature in CCN (Content-Centric Networking) that can be used to enhance the efficiency of network resource usage and balance network congestion. Based on minimum cost maximum flow theory, we propose a multipath routing algorithm that aims to minimize delay and maximize bandwidth. It chooses different routing paths automatically according to differences in link bandwidth and delay, achieving optimal bandwidth utilization across the entire network. Simulation experiments show that, compared with shortest-path routing, our algorithm reduces packet loss, decreases the bottleneck link load by approximately 60%, and alleviates network congestion.
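As a hedged sketch of the underlying primitive (not the authors' algorithm), a minimum-cost maximum-flow over a link graph with bandwidth as capacity and delay as cost can be computed with networkx; the topology and numbers below are illustrative assumptions:

    import networkx as nx

    G = nx.DiGraph()
    # capacity = link bandwidth, weight = link delay (illustrative values)
    G.add_edge("src", "r1", capacity=10, weight=2)
    G.add_edge("src", "r2", capacity=5, weight=1)
    G.add_edge("r1", "dst", capacity=10, weight=2)
    G.add_edge("r2", "dst", capacity=5, weight=1)

    flow = nx.max_flow_min_cost(G, "src", "dst")  # per-edge flow assignment
    print(flow, nx.cost_of_flow(G, flow))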
Wage and Labor Standards Administration (DOL), Washington, DC.
The 1966 amendments to the Fair Labor Standards Act extended enterprise coverage to all public and private educational institutions. In October 1968, one out of seven of the 2 million nonsupervisory nonteaching employees working in schools was paid below the $1.30 minimum wage which became effective on February 1, 1969. Three-fifths of those below…
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Social Security's special minimum benefit.
Olsen, K A; Hoffmeyer, D
Social Security's special minimum primary insurance amount (PIA) provision was enacted in 1972 to increase the adequacy of benefits for regular long-term, low-earning covered workers and their dependents or survivors. At the time, Social Security also had a regular minimum benefit provision for persons with low lifetime average earnings and their families. Concerns were rising that the low lifetime average earnings of many regular minimum beneficiaries resulted from sporadic attachment to the covered workforce rather than from low wages. The special minimum benefit was seen as a way to reward regular, low-earning workers without providing the windfalls that would have resulted from raising the regular minimum benefit to a much higher level. The regular minimum benefit was subsequently eliminated for workers reaching age 62, becoming disabled, or dying after 1981. Under current law, the special minimum benefit will phase out over time, although it is not clear from the legislative history that this was Congress's explicit intent. The phaseout results from two factors: (1) special minimum benefits are paid only if they are higher than benefits payable under the regular PIA formula, and (2) the value of the regular PIA formula, which is indexed to wages before benefit eligibility, has increased faster than that of the special minimum PIA, which is indexed to inflation. Under the Social Security Trustees' 2000 intermediate assumptions, the special minimum benefit will cease to be payable to retired workers attaining eligibility in 2013 and later. Their benefits will always be larger under the regular benefit formula. As policymakers consider Social Security solvency initiatives--particularly proposals that would reduce benefits or introduce investment risk--interest may increase in restoring some type of special minimum benefit as a targeted protection for long-term low earners. Two of the three reform proposals offered by the President's Commission to Strengthen
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Ludmila Bardin
2010-01-01
Full Text Available Models for estimating air temperature from geographic factors were developed in this work to estimate mean monthly and annual maximum and minimum temperatures in the region comprising the municipalities of the Polo Turístico do Circuito das Frutas of São Paulo State. Multiple regression equations as a function of altitude, latitude, and longitude, and simple regressions as a function of altitude alone, were obtained; their coefficients of determination varied from 0.91 to 0.96 for the maximum and from 0.71 to 0.94 for the minimum temperatures. Maps of the spatial variability of the mean monthly and annual maximum and minimum temperatures are presented for the region.
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
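The Hargreaves step can be sketched as follows (this is the standard form of the model; the study's exact calibration and units are assumed):

    def hargreaves_et0(t_max, t_min, ra):
        # t_max, t_min: monthly average max/min temperature (deg C)
        # ra: exoatmospheric solar radiation, expressed as mm/day of evaporation
        t_mean = (t_max + t_min) / 2.0
        return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

    # Illustrative values only:
    print(hargreaves_et0(t_max=25.0, t_min=10.0, ra=12.0))  # ~3.8 mm/day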
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T^{-1}(I - A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness defined here and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
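In the low-temperature regime quoted above, the result can be written compactly (a sketch in my notation; the proportionality constant is left unspecified):

    % Thermodynamic hardness at zero fractional charge, low T (sketch)
    \eta_{T} \;\propto\; \frac{I - A}{T}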
Carlos Rogério de Mello
2010-04-01
Full Text Available Maximum discharges are applied to the design of hydraulic structures, while minimum discharges are used to assess water availability in hydrographic basins and the behavior of subterranean flow. This study aimed to construct statistical confidence intervals for annual maximum and minimum daily discharges and to relate them to the physiographic characteristics of the six largest basins of the Alto Rio Grande region, State of Minas Gerais, upstream of the UHE-Camargos/CEMIG reservoir. The Gumbel and Gamma probability distributions were applied to the maximum and minimum discharge series, respectively, using maximum likelihood estimators. The confidence intervals constitute an important tool for better understanding and estimating discharges, and they are influenced by the geological characteristics of the basins. Based on them, the Alto Rio Grande region comprises two distinct areas: the first, covering the Aiuruoca, Carvalhos, and Bom Jardim basins, showed the highest maximum and minimum discharges, indicating potential for more significant floods and greater water availability; the second, associated with the F. Laranjeiras, Madre de Deus, and Andrelândia basins, showed the lowest water availability.
A transcribed emergency record at minimum cost.
Klimt, C R; Becker, S; Fox, B S; Ensminger, F
1983-09-01
We have developed a new method of implementing a transcribed emergency record at minimum cost. Dictated emergency records are typed immediately by a transcriber located in the emergency department. This member of the medical record transcriber pool is given other non-urgent medical record material to type when there are no emergency records to type. The costs are reduced to the same level as routine medical records transcription. In 1982, 19,892 of the total 28,000 emergency records were transcribed by adding only 1.35 full-time equivalents (FTEs) to the transcriber pool. The remaining charts were handwritten because insufficient funds had been allocated to type all emergency records. The transcriber is capable of typing a maximum of 64 charts, averaging 13 lines (156 words) each, per 8-hour shift. The service can be phased in gradually as funds for transcribing the emergency record are allocated to the central transcriber pool.
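A back-of-the-envelope check of the reported workload (the shifts-per-year figure is my assumption, not stated in the abstract):

    charts_per_shift = 64      # maximum stated throughput per 8-hour shift
    ftes_added = 1.35
    shifts_per_year = 250      # assumption: roughly one shift per working day
    capacity = charts_per_shift * ftes_added * shifts_per_year
    print(capacity)            # 21600 charts/year, consistent with 19,892 transcribed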
The Second Law Today: Using Maximum-Minimum Entropy Generation
Umberto Lucia
2015-11-01
Full Text Available There are a great number of thermodynamic schools, independent of one another and lacking a unified general approach, split in particular over non-equilibrium thermodynamics. In 1912, in relation to stationary non-equilibrium states, Ehrenfest introduced the fundamental question of the existence of a functional that achieves its extreme value for stable states, as entropy does for stationary states in equilibrium thermodynamics. Today, the new branch frontiers of science and engineering, from power engineering to environmental sciences, from chaos to complex systems, from life sciences to nanosciences, etc., require a unified approach in order to optimize results and obtain a powerful approach to non-equilibrium thermodynamics and open systems. In this paper, a generalization of the Gouy–Stodola approach is suggested as a possible answer to the Ehrenfest question.
The Best Defense: Making Maximum Sense of Minimum Deterrence
2011-06-01
chief nuclear scientist, Homi Bhabha, looked to nuclear power as a means to fuel India's economic development and bring India to the upper echelon of... nuclear weapons. Homi Bhabha pressed Prime Minister Shastri to approve a nuclear test so that India could both showcase the... removed from being capable of conducting a nuclear test explosion. The untimely death of Homi Bhabha in 1966 and war with Pakistan in 1971 served to
Kernel maximum autocorrelation factor and minimum noise fraction transformations
Nielsen, Allan Aasbjerg
2010-01-01
The kernel versions map the data into a higher (possibly infinite) dimensional feature space via the kernel function and then perform a linear analysis in that space. Three examples show the very successful application of kernel MAF/MNF analysis to 1) change detection in DLR 3K camera data recorded 0.7 seconds apart over a busy motorway, 2) change detection
董丹宏; 黄刚
2015-01-01
Based on daily maximum and minimum temperature data from 740 homogenized surface meteorological stations, the present study investigates the regional characteristics of the temperature trend and the dependence of maximum and minimum temperature and diurnal temperature range changes on altitude during the period 1963–2012. It is found that the magnitude of the minimum temperature increase is larger than that of the maximum temperature increase. The significantly warming areas are located at high altitude, and all increase remarkably in size during the study period. The maximum and minimum temperature and diurnal temperature range trends increase with altitude, except in spring. The correlation coefficients between the maximum temperature trend and altitude are the highest. At the same altitude, the amplitudes of maximum and minimum temperature are inconsistent: they exhibit increasing trends in the 1990s, with significant change at low altitude; they change minimally in the 1980s; and at high altitudes (above 2000 m), the magnitudes of their changes are weak before the 1990s but stronger in the last 10 years of the study period. The seasonal variability of the diurnal temperature range is large above 2000 m, decreasing in summer but increasing in winter. Before the 1990s, there is no significant variation of maximum and minimum temperature with altitude. However, their trends almost all decrease and then increase with altitude in the last 20 years. Additionally, the climate response in highland areas is more sensitive than that in lowland areas.
Orme, John S.; Nobbs, Steven G.
1995-01-01
The minimum fuel mode of the NASA F-15 research aircraft is designed to minimize fuel flow while maintaining constant net propulsive force (FNP), effectively reducing thrust specific fuel consumption (TSFC), during cruise flight conditions. The test maneuvers were at stabilized flight conditions. The aircraft test engine was allowed to stabilize at the cruise conditions before data collection was initiated; data were first recorded with performance seeking control (PSC) not engaged, and then with the PSC system engaged. The maneuvers were flown back-to-back to allow direct comparisons by minimizing the effects of variations in test day conditions. The minimum fuel mode was evaluated at subsonic and supersonic Mach numbers and focused on three altitudes: 15,000; 30,000; and 45,000 feet. Flight data were collected at part power, military power, partial afterburning, and maximum afterburning conditions. The TSFC savings at supersonic Mach numbers, ranging from approximately 4% to nearly 10%, are in general much larger than at subsonic Mach numbers because of PSC trims to the afterburner.
Intensity contrast of the average supergranule
Langfellner, J; Gizon, L
2016-01-01
While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities, like temperature or density, at supergranulation scales. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of ~10^4 outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is (7.8±0.6)×10^-4 (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about 1.1±0.1 K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Hyland, D. C.
1983-01-01
A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).
Parameterized Traveling Salesman Problem: Beating the Average
Gutin, G.; Patel, V.
2016-01-01
In the traveling salesman problem (TSP), we are given a complete graph Kn together with an integer weighting w on the edges of Kn, and we are asked to find a Hamilton cycle of Kn of minimum weight. Let h(w) denote the average weight of a Hamilton cycle of Kn for the weighting w. Vizing in 1973 asked
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
An Improved CO2-Crude Oil Minimum Miscibility Pressure Correlation
Hao Zhang
2015-01-01
Full Text Available Minimum miscibility pressure (MMP), which plays an important role in miscible flooding, is a key parameter in determining whether crude oil and gas are completely miscible. On the basis of 210 groups of CO2-crude oil system minimum miscibility pressure data, an improved CO2-crude oil system minimum miscibility pressure correlation was built by a modified conjugate gradient method and a global optimizing method. The new correlation is a uniform empirical correlation to calculate the MMP for both light oil and heavy oil and is expressed as a function of reservoir temperature, C7+ molecular weight of crude oil, and mole fractions of volatile components (CH4 and N2) and intermediate components (CO2, H2S, and C2~C6) of crude oil. Compared to the eleven most popular and relatively high-accuracy CO2-oil system MMP correlations in the previous literature, using nine further groups of CO2-oil MMP experimental data that had not been used to develop the new correlation, the new empirical correlation provides the best reproduction of the nine groups of CO2-oil MMP experimental data, with a percentage average absolute relative error (%AARE) of 8% and a percentage maximum absolute relative error (%MARE) of 21%, respectively.
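The two error metrics quoted are standard and can be sketched as follows (definitions assumed; the data values are illustrative, not the paper's):

    def aare_mare(predicted, measured):
        # percentage average / maximum absolute relative error
        errs = [abs(p - m) / m * 100.0 for p, m in zip(predicted, measured)]
        return sum(errs) / len(errs), max(errs)

    # Illustrative MMP values (MPa):
    aare, mare = aare_mare([20.1, 15.3, 31.0], [21.5, 14.8, 30.2])
    print(f"%AARE = {aare:.1f}, %MARE = {mare:.1f}")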
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis.
Fingerprinting with Minimum Distance Decoding
Lin, Shih-Chun; Gamal, Hesham El
2007-01-01
This work adopts an information theoretic framework for the design of collusion-resistant coding/decoding schemes for digital fingerprinting. More specifically, the minimum distance decision rule is used to identify 1 out of t pirates. Achievable rates, under this detection rule, are characterized in two distinct scenarios. First, we consider the averaging attack, where a random coding argument is used to show that the rate 1/2 is achievable with t=2 pirates. Our study is then extended to the general case of arbitrary t, highlighting the underlying complexity-performance tradeoff. Overall, these results establish the significant performance gains offered by minimum distance decoding as compared to other approaches based on orthogonal codes and correlation detectors. In the second scenario, we characterize the achievable rates, with minimum distance decoding, under any collusion attack that satisfies the marking assumption. For t=2 pirates, we show that the rate 1 - H(0.25) ≈ 0.188 is achievable using an ...
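A quick check of the quoted rate for t = 2 under the marking assumption, where H is the binary entropy function (in bits):

    import math

    def binary_entropy(p):
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    print(1 - binary_entropy(0.25))  # ~0.1887, matching the abstract's 0.188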
Rolling bearing feature frequency extraction using extreme average envelope decomposition
Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli
2016-09-01
The vibration signal contains a wealth of sensitive information reflecting the running status of the equipment. Decomposing the signal and properly extracting the effective information is one of the most important steps for precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from problems including mode mixing and low decomposition accuracy. To address these problems, the EAED (extreme average envelope decomposition) method is presented, based on EMD. EAED has three advantages. Firstly, it uses a midpoint envelope rather than the separate maximum and minimum envelopes used in EMD, so the average variability of the signal can be described accurately. Secondly, in order to reduce envelope errors during signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Thirdly, the similar-triangle principle is utilized to calculate the times of the extreme average points accurately, so the influence of the sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can gradually separate single frequency components from a complex signal. EAED not only isolates the vibration frequency components characteristic of three typical bearing faults but also requires fewer decomposition layers: by replacing the two envelopes with a single envelope, it isolates the fault characteristic frequency with fewer decomposition layers, improving the precision of signal decomposition.
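A minimal sketch of the midpoint-envelope idea as read from the abstract (not the authors' code; the extremum handling and interpolation choices are assumptions):

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    def midpoint_envelope(x):
        # Locate maxima and minima, then interpolate the midpoints between
        # consecutive extrema instead of building two separate envelopes.
        maxima = argrelextrema(x, np.greater)[0]
        minima = argrelextrema(x, np.less)[0]
        ext = np.sort(np.concatenate([maxima, minima]))
        t_mid = (ext[:-1] + ext[1:]) / 2.0
        v_mid = (x[ext[:-1]] + x[ext[1:]]) / 2.0
        return CubicSpline(t_mid, v_mid)(np.arange(len(x)))

    t = np.linspace(0.0, 1.0, 2000)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 80 * t)
    mean_env = midpoint_envelope(x)  # estimate of the local mean of the signal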
Cosmic rays during the unusual solar minimum of 2009
Gil, Agnieszka
Examination of the solar activity (SA) parameters during the long-lasting minimum epoch 23/24 shows that their values differ substantially from those measured in previous solar minimum epochs. The Sun was extremely quiet and there were nearly no sunspots (e.g. Smith, 2011). The averaged proton density was lower during this minimum (~0.70) than in the three previous minimum epochs (Jian et al., 2011). The averaged strength of the interplanetary magnetic field during the last minimum was truly low (a drop of ~0.36), and a decrease in the solar wind dynamic pressure (~0.22) was noticed (McComas et al., 2008). Solar polar magnetic fields were weaker (~0.40) during this minimum in comparison with the last three minimum epochs of SA (Wang et al., 2009). Kirk et al. (2009) showed that the EUV polar coronal hole area was smaller (~0.15) than at the beginning of Solar Cycle 23. The total solar irradiance at 1 AU was lower by more than 0.2 W m^-2 than in the last minimum in 1996 (Fröhlich, 2009). Values of the solar radio flux f10.7 were smaller than during the recent four minima (Jian et al., 2011). The tilt angle of the heliospheric current sheet declined much more slowly during the recent minimum in comparison with the previous two. The values of galactic cosmic ray (GCR) intensity measured by neutron monitors were the highest ever recorded (e.g. Moraal and Stoker, 2010). In 2007 neutron monitors reached the values measured during the last negative polarity minimum, 1987, and continued to grow throughout the beginning of 2010. At the same time, the level of anomalous cosmic ray intensities was comparable with the 1987 minimum (Leske et al., 2013). The average amplitude of the 27-day recurrence of the GCR intensity was as high as during the previous minimum epoch of 1996 (positive polarity), much higher than during the minimum one Hale cycle back (Gil et al., 2012). Modzelewska and Alania (2013) showed that the 27-day periodicity of the GCR intensity is stable
Young, Vershawn Ashanti
2004-01-01
"Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...
The minimum work requirement for distillation processes
Cerci, Yunus; Cengel, Yunus A.; Wood, Byard [Nevada Univ., Las Vegas, NV (United States). Dept. of Mechanical Engineering]
2000-07-01
A typical ideal distillation process is proposed and analyzed using the first and second laws of thermodynamics, with particular attention to the minimum work requirement for individual processes. The distillation process consists of an evaporator, a condenser, a heat exchanger, and a number of heaters and coolers. Several Carnot engines are also employed to perform the heat interactions of the distillation process with the surroundings and to determine the minimum work requirement for the processes. The Carnot engines give the maximum possible work output or the minimum work input associated with the processes, and therefore the net result of these inputs and outputs leads to the minimum work requirement for the entire distillation process. It is shown that the minimum work relation for the distillation process is the same as the minimum work input relation found by Cerci et al. [1] for an incomplete separation of incoming saline water, and depends only on the properties of the incoming saline water and the outgoing pure water and brine. Also, certain aspects of the minimum work relation found are discussed briefly. (authors)
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in N_f = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
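The AMA estimator has the standard form below (a sketch from the general AMA literature; the notation is mine, with O^approx_g the relaxed-stopping-condition approximation translated by the covariant symmetry g, and N_G the number of symmetry translations):

    % All-mode-averaging estimator (sketch)
    O^{\mathrm{AMA}} \;=\; O - O^{\mathrm{approx}}
      \;+\; \frac{1}{N_G} \sum_{g \in G} O^{\mathrm{approx}}_{g}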
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Effects of protocol design on lactate minimum power.
Johnson, M A; Sharpe, G R
2011-03-01
The aim of this investigation was to use a validated lactate minimum test protocol and evaluate whether blood lactate responses and the lactate minimum power are influenced by the starting power (study 1) and 1 min inter-stage rest intervals (study 2) during the incremental phase. Study 1: 8 subjects performed a lactate minimum test comprising a lactate elevation phase, recovery phase, and incremental phase comprising 5 continuous 4 min stages, with starting power being 40% or 45% of the maximum power achieved during the lactate elevation phase and with power increments of 5% maximum power. Study 2: 8 subjects performed 2 identical lactate minimum tests, except that during one of the tests the incremental phase included 1 min inter-stage rest intervals. The lactate minimum power was lower when the incremental phase commenced at 40% (175±29 W) compared to 45% (184±30 W) maximum power (p < 0.05), which compromises its validity and therefore training status evaluation and exercise prescription.
Negative Average Preference Utilitarianism
Roger Chao
2012-03-01
Full Text Available For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed as their existence is dependent on the "harmful" event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current "positive" forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).
Knapp, C L; Stoffel, T L; Whitaker, S D
1980-10-01
Monthly averaged data are presented which describe the availability of solar radiation at 248 National Weather Service stations. Monthly and annual average daily insolation and temperature values have been computed from a base of 24 to 25 years of data. Average daily maximum, minimum, and monthly temperatures are provided for most locations in both Celsius and Fahrenheit. Heating and cooling degree-days were computed relative to a base of 18.3 °C (65 °F). For each station, global insolation and K_T (cloudiness index) were calculated on a monthly and annual basis. (MHR)
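The degree-day convention used in the report can be sketched as follows (the daily mean temperatures are illustrative):

    BASE_C = 18.3  # the 65 F base used in the report

    def degree_days(daily_mean_temps_c):
        hdd = sum(max(BASE_C - t, 0.0) for t in daily_mean_temps_c)  # heating
        cdd = sum(max(t - BASE_C, 0.0) for t in daily_mean_temps_c)  # cooling
        return hdd, cdd

    # One illustrative week of daily mean temperatures (deg C):
    print(degree_days([10.0, 12.5, 20.0, 22.1, 15.0, 8.3, 25.4]))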
Simple and Three-Valued Simple Minimum Coloring Games
Musegaas, Marieke; Borm, Peter; Quant, Marieke
2015-01-01
In this paper minimum coloring games are considered. We characterize the type of conflict graphs inducing simple or three-valued simple minimum coloring games. We provide an upper bound on the number of maximum cliques of conflict graphs inducing such games. Moreover, a characterization of the core
Dose variation during solar minimum
Gussenhoven, M.S.; Mullen, E.G.; Brautigam, D.H. (Phillips Lab., Geophysics Directorate, Hanscom Air Force Base, MA (US)); Holeman, E. (Boston Univ., MA (United States). Dept. of Physics)
1991-12-01
In this paper, the authors use direct measurement of dose to show the variation in inner and outer radiation belt populations at low altitude from 1984 to 1987. This period includes the recent solar minimum that occurred in September 1986. The dose is measured behind four thicknesses of aluminum shielding and for two thresholds of energy deposition, designated HILET and LOLET. The authors calculate an average dose per day for each month of satellite operation. The authors find that the average proton (HILET) dose per day (obtained primarily in the inner belt) increased systematically from 1984 to 1987, and has a high anticorrelation with sunspot number when offset by 13 months. The average LOLET dose per day behind the thinnest shielding is produced almost entirely by outer zone electrons and varies greatly over the period of interest. If any trend can be discerned over the 4 year period it is a decreasing one. For shielding of 1.55 g/cm^2 (227 mil) Al or more, the LOLET dose is complicated by contributions from >100 MeV protons and bremsstrahlung.
Ensemble Averaged Gravity Theory
Khosravi, Nima
2016-01-01
We put forward the idea that all theoretically consistent models of gravity contribute to the observed gravitational interaction. In this formulation each model comes with its own Euclidean path integral weight, where general relativity (GR) automatically has the maximum weight in high-curvature regions. We employ this idea in the framework of Lovelock models and show that in four dimensions the result is a specific form of f(R,G) model. This specific f(R,G) satisfies the stability conditions and has a self-accelerating solution. Our model is consistent with the local tests of gravity since its behavior is the same as GR in high-curvature regimes. In the low-curvature regime the gravitational force is weaker than in GR, which can be interpreted as the existence of a repulsive fifth force at very large scales. Interestingly, there is an intermediate-curvature regime where the gravitational force is stronger in our model than in GR. The different behavior of our model in comparison with GR in both low- and intermediate-curvature regimes ...
Independence, Odd Girth, and Average Degree
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...
On the maximum rate of change in sunspot number growth and the size of the sunspot cycle
Wilson, Robert M.
1990-01-01
Statistically significant correlations exist between the size (maximum amplitude) of the sunspot cycle and, especially, the maximum value of the rate of rise during the ascending portion of the sunspot cycle, where the rate of rise is computed either as the difference in the month-to-month smoothed sunspot number values or as the 'average rate of growth' in smoothed sunspot number from sunspot minimum. Based on the observed values of these quantities (equal to 10.6 and 4.63, respectively) as of early 1989, it is inferred that cycle 22's maximum amplitude will be about 175 ± 30 or 185 ± 10, respectively, where the error bars represent approximately twice the average error found during cycles 10-21 from the two fits.
Rising above the Minimum Wage.
Even, William; Macpherson, David
An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…
Cardinal, Jean; Joret, Gwenaël
2008-01-01
We study graph orientations that minimize the entropy of the in-degree sequence. The problem of finding such an orientation is an interesting special case of the minimum entropy set cover problem previously studied by Halperin and Karp [Theoret. Comput. Sci., 2005] and by the current authors [Algorithmica, to appear]. We prove that the minimum entropy orientation problem is NP-hard even if the graph is planar, and that there exists a simple linear-time algorithm that returns an approximate solution with an additive error guarantee of 1 bit. This improves on the only previously known algorithm which has an additive error guarantee of log_2 e bits (approx. 1.4427 bits).
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
DILEMATIKA PENETAPAN UPAH MINIMUM
. Pitaya
2015-02-01
Full Text Available In the effort to create appropriate wages for employees, it is necessary to set wages with regard to rising poverty, without ignoring productivity growth, the progressivity of companies, and economic growth. New minimum wages at the provincial and regional/municipality levels have taken effect each 1st January in Indonesia since 2001. The minimum wage at the provincial level must be determined 30 days before 1st January, whereas the minimum wage at the regional/municipality level must be determined 40 days before 1st January. Moreover, an article governs that the minimum wage is to be revised annually. Considering the determination and revision dates above, it can be predicted that the periods before and after the determination date will be critical, because controversy among the parties to industrial relations will arise. The determination of the minimum wage will always be a dilemmatic step for the Government: through this policy, on one side the government attempts to attract investors, yet on the other side it also has to protect employees so that they receive wages appropriate to the standard of living.
Remarks on the Lower Bounds for the Average Genus
Yi-chao Chen
2011-01-01
Let G be a graph of maximum degree at most four. By using the overlap matrix method which is introduced by B. Mohar, we show that the average genus of G is not less than 1/3 of its maximum genus, and the bound is best possible. Also, a new lower bound of average genus in terms of girth is derived.
Minimum quality standards and exports
2015-01-01
This paper studies the interaction of a minimum quality standard and exports in a vertical product differentiation model when firms sell global products. If ex ante quality of foreign firms is lower (higher) than the quality of exporting firms, a mild minimum quality standard in the home market hinders (supports) exports. The minimum quality standard increases quality in both markets. A welfare maximizing minimum quality standard is always lower under trade than under autarky. A minimum quali...
Minimum Variance Portfolios in the Brazilian Equity Market
Alexandre Rubesam
2013-03-01
Full Text Available We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
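For reference, the unconstrained global minimum variance weights have the closed form w = S^-1 1 / (1' S^-1 1); a minimal sketch (the sample covariance values are illustrative, and the paper's GARCH-based estimators and long-short constraints are not reproduced):

    import numpy as np

    def min_variance_weights(cov):
        # w = inv(S) 1 / (1' inv(S) 1): the global minimum variance portfolio
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    cov = np.array([[0.040, 0.006, 0.004],
                    [0.006, 0.090, 0.010],
                    [0.004, 0.010, 0.160]])
    print(min_variance_weights(cov))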
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.
NGA-West 2 GMPE average site coefficients for use in earthquake-resistant design
Borcherdt, Roger D.
2015-01-01
Site coefficients corresponding to those in tables 11.4–1 and 11.4–2 of Minimum Design Loads for Buildings and Other Structures published by the American Society of Civil Engineers (Standard ASCE/SEI 7-10) are derived from four of the Next Generation Attenuation West2 (NGA-W2) Ground-Motion Prediction Equations (GMPEs). The resulting coefficients are compared with those derived by other researchers and those derived from the NGA-West1 database. The derivation of the NGA-W2 average site coefficients provides a simple procedure to update site coefficients with each update in the Maximum Considered Earthquake Response (MCER) maps. The simple procedure yields average site coefficients consistent with those derived for site-specific design purposes. The NGA-W2 GMPEs provide simple scale factors to reduce conservatism in current simplified design procedures.
Do Minimum Wages Fight Poverty?
David Neumark; William Wascher
1997-01-01
The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of the employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. Accuracy was evaluated by comparing model forecasts to actual values using the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
Chieh-Fan Chen
2011-01-01
Full Text Available This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. Accuracy was evaluated by comparing model forecasts to actual values using the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
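A hedged sketch of this kind of ARIMA fit with exogenous regressors (statsmodels; the file name, column names, and the (1, 1, 1) order are illustrative assumptions, not the study's settings):

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    df = pd.read_csv("ed_monthly.csv", parse_dates=["month"], index_col="month")
    exog = df[["mean_max_temp", "relative_humidity"]]

    fit = ARIMA(df["revenue"], exog=exog, order=(1, 1, 1)).fit()
    # In-sample mean absolute percentage error, as used to evaluate accuracy:
    mape = (fit.resid.abs() / df["revenue"]).mean() * 100.0
    print(fit.summary(), mape)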
Minimum wages, earnings, and migration
Boffy-Ramirez, Ernest
2013-01-01
Does increasing a state’s minimum wage induce migration into the state? Previous literature has shown mobility in response to welfare benefit differentials across states, yet few have examined the minimum wage as a cause of mobility...
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Hazoglou, Michael J; Walther, Valentin; Dixit, Purushottam D; Dill, Ken A
2015-08-07
There has been interest in finding a general variational principle for non-equilibrium statistical mechanics. We give evidence that Maximum Caliber (Max Cal) is such a principle. Max Cal, a variant of maximum entropy, predicts dynamical distribution functions by maximizing a path entropy subject to dynamical constraints, such as average fluxes. We first show that Max Cal leads to standard near-equilibrium results—including the Green-Kubo relations, Onsager's reciprocal relations of coupled flows, and Prigogine's principle of minimum entropy production—in a way that is particularly simple. We develop some generalizations of the Onsager and Prigogine results that apply arbitrarily far from equilibrium. Because Max Cal does not require any notion of "local equilibrium," or any notion of entropy dissipation, or temperature, or even any restriction to material physics, it is more general than many traditional approaches. It is also applicable to flows and traffic on networks, for example.
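A toy sketch of the Max Cal recipe on a discrete system: maximizing path entropy subject to an average-flux constraint yields exponential-family weights p_i proportional to exp(lam*J_i), with the Lagrange multiplier fixed by moment matching. The three paths and their flux values below are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

J = np.array([-1.0, 0.0, 2.0])   # flux carried by each discrete path
J_target = 0.5                   # imposed average flux <J>

def flux_gap(lam):
    """Mean flux under p_i ~ exp(lam * J_i), minus the target."""
    w = np.exp(lam * J)
    return (w @ J) / w.sum() - J_target

lam = brentq(flux_gap, -50, 50)          # solve for the Lagrange multiplier
p = np.exp(lam * J)
p /= p.sum()                             # caliber-maximizing path weights
print("path probabilities:", p, " path entropy:", -(p @ np.log(p)))
```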
Chen, Chun-Hung; Wu, Ho-Ting; Ke, Kai-Wei
Simulations are often deployed to evaluate proposed mechanisms or algorithms in Mobile Ad Hoc Networks (MANET). In MANET, the impacts of some simulation parameters, such as transmission range and data rate, are noticeable. However, the effect of the mobility model was not clear until recently. Random Waypoint (RWP) is one of the most widely applied nodal mobility models in simulations due to its clear procedure and easy deployment. However, it exhibits two major problems: decaying average speed and the border effect. Both problems overestimate the performance of the employed protocols and applications. Although many recently proposed mobility models are able to reduce or eliminate these problems, the concept of Diverse Average Speed (DAS) has not been introduced. DAS aims to provide different average speeds within the same speed range. In most mobility models, the average speed is decided once the minimum and maximum speeds are set. In this paper, we propose a novel mobility model, named General Ripple Mobility Model (GRMM). GRMM targets providing a uniform nodal spatial distribution and DAS without decaying average speed. Simulation and analytic results demonstrate the merits of the outstanding properties of the GRMM model.
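The decaying-average-speed problem mentioned in this abstract can be reproduced in a few lines: when per-leg speeds are drawn uniformly, slow legs last longer, so the time-averaged speed falls below the arithmetic mean, toward (v_max - v_min)/ln(v_max/v_min) when leg lengths are independent of speed. The parameters below are arbitrary; this is an illustration, not the GRMM model.

```python
import numpy as np

rng = np.random.default_rng(1)
v_min, v_max, legs = 1.0, 20.0, 100_000

speed = rng.uniform(v_min, v_max, legs)   # speed chosen per waypoint leg
dist = rng.uniform(50.0, 500.0, legs)     # leg lengths, arbitrary units
time = dist / speed                       # slow legs consume more time

time_avg = dist.sum() / time.sum()        # what a node actually averages over time
limit = (v_max - v_min) / np.log(v_max / v_min)
print(f"arithmetic mean {speed.mean():.2f}, time average {time_avg:.2f}, "
      f"theoretical limit {limit:.2f}")
```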
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Kavitha, Telikepalli; Nimbhorkar, Prajakta
2010-01-01
We consider an extension of the popular matching problem in this paper. The input to the popular matching problem is a bipartite graph G = (A ∪ B, E), where A is a set of people, B is a set of items, and each person a belonging to A ranks a subset of items in an order of preference, with ties allowed. The popular matching problem seeks to compute a matching M* between people and items such that there is no matching M where more people are happier with M than with M*. Such a matching M* is called a popular matching. However, there are simple instances where no popular matching exists. Here we consider the following natural extension to the above problem: associated with each item b belonging to B is a non-negative price cost(b), that is, for any item b, new copies of b can be added to the input graph by paying an amount of cost(b) per copy. When G does not admit a popular matching, the problem is to "augment" G at minimum cost such that the new graph admits a popular matching. We show that this problem is...
DETERMINING MINIMUM HIKING TIME USING DEM
ZSOLT MAGYARI-SÁSKA
2012-11-01
Full Text Available Determining minimum hiking time using DEM. Minimum hiking time calculation can be used to assess the maximum area where a lost person can be. Such area delimitation can help rescue teams organize their missions efficiently. The two well-known walking time rules were used to determine, compare and correlate the obtained results in a test area. The calculated times have a high correlation coefficient, which makes possible a precise conversion between Naismith and Tobler walking times. For delimiting the rescue area, a graph-based model built from a raster layer was implemented using the R environment. The main challenge in such modeling is efficient memory management, as the use of Dijkstra's algorithm on a directional cost graph requires high memory resources.
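A minimal sketch of the graph-based idea, assuming a tiny invented DEM: edge weights come from Tobler's hiking function, Dijkstra's algorithm (via heapq) returns minimum hiking times, and the rescue area is the set of cells reachable within a time budget. This illustrates the approach only; it is not the authors' R implementation.

```python
import heapq
import numpy as np

dem = np.array([[100, 105, 120],
                [102, 110, 140],
                [101, 115, 160]], dtype=float)   # elevations in metres (invented)
cell = 100.0                                      # grid spacing in metres

def tobler_hours(dz, dx):
    """Time in hours for one step; Tobler: v = 6*exp(-3.5*|slope + 0.05|) km/h."""
    v_kmh = 6.0 * np.exp(-3.5 * abs(dz / dx + 0.05))
    return (dx / 1000.0) / v_kmh

rows, cols = dem.shape
start = (0, 0)                                    # last known position
best = {start: 0.0}
heap = [(0.0, start)]
while heap:                                       # Dijkstra over 4-neighbours
    t, (r, c) = heapq.heappop(heap)
    if t > best.get((r, c), np.inf):
        continue
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            nt = t + tobler_hours(dem[nr, nc] - dem[r, c], cell)
            if nt < best.get((nr, nc), np.inf):
                best[(nr, nc)] = nt
                heapq.heappush(heap, (nt, (nr, nc)))

budget = 0.05                                     # hours since the person was seen
print(sorted(p for p, t in best.items() if t <= budget))
```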
Average Convexity in Communication Situations
Slikker, M.
1998-01-01
In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graphrestricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlyin
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
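For the Mean Energy Model used as the running illustration above, a small numerical sketch: among distributions with a prescribed mean energy, entropy is maximized by the Gibbs form p_i proportional to exp(-beta*E_i), with beta fixed by moment matching. The energy levels and target mean below are invented for the example.

```python
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 4.0])   # energy levels (illustrative)
E_target = 1.2                        # prescribed mean energy

def moment_gap(beta):
    """Mean energy under the Gibbs weights, minus the constraint value."""
    w = np.exp(-beta * E)
    return (w @ E) / w.sum() - E_target

beta = brentq(moment_gap, -50, 50)    # solve the moment constraint for beta
p = np.exp(-beta * E)
p /= p.sum()
print("MaxEnt distribution:", p, " entropy:", -(p @ np.log(p)))
```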
Wilson, Robert M.; Hathaway, David H.
2008-01-01
For 1996-2006 (cycle 23), 12-month moving averages of the aa geomagnetic index strongly correlate (r = 0.92) with 12-month moving averages of solar wind speed, and 12-month moving averages of the number of coronal mass ejections (CMEs) (halo and partial halo events) strongly correlate (r = 0.87) with 12-month moving averages of sunspot number. In particular, the minimum (15.8, September/October 1997) and maximum (38.0, August 2003) values of the aa geomagnetic index occur simultaneously with the minimum (376 km/s) and maximum (547 km/s) solar wind speeds, both being strongly correlated with the following recurrent component (due to high-speed streams). The large peak of aa geomagnetic activity in cycle 23, the largest on record, spans the interval late 2002 to mid 2004 and is associated with a decreased number of halo and partial halo CMEs, whereas the smaller secondary peak of early 2005 seems to be associated with a slight rebound in the number of halo and partial halo CMEs. Based on the observed aaM during the declining portion of cycle 23, RM for cycle 24 is predicted to be larger than average, being about 168+/-60 (the 90% prediction interval), whereas based on the expected aam for cycle 24 (greater than or equal to 14.6), RM for cycle 24 should measure greater than or equal to 118+/-30, yielding an overlap of about 128+/-20.
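A hedged sketch of the smoothing step used throughout this analysis: 12-month moving averages of two series, followed by a correlation coefficient. The two series below are synthetic stand-ins for the aa index and the solar wind speed, not the actual records.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 132                                           # eleven years of monthly values
trend = np.sin(2 * np.pi * np.arange(n) / 132)    # a shared cycle-like signal
aa = 20 + 10 * trend + rng.normal(0, 3, n)        # stand-in for the aa index
wind = 430 + 80 * trend + rng.normal(0, 25, n)    # stand-in for solar wind speed

kernel = np.ones(12) / 12.0                       # 12-month moving average
aa_smooth = np.convolve(aa, kernel, mode="valid")
wind_smooth = np.convolve(wind, kernel, mode="valid")

r = np.corrcoef(aa_smooth, wind_smooth)[0, 1]
print(f"correlation of the smoothed series: r = {r:.2f}")
```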
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
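A minimal numpy sketch of the idea, assuming a linear predictor and a Gaussian correntropy kernel; the data, kernel width sigma and penalty lam are illustrative, and simple gradient ascent stands in for the authors' alternating optimization.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)
y[rng.choice(n, 20, replace=False)] *= -1     # inject 10% label noise

sigma, lam, lr = 1.0, 0.01, 0.05
w = np.zeros(d)
for _ in range(500):                          # ascend the regularized MCC objective
    err = y - X @ w
    k = np.exp(-err**2 / (2 * sigma**2))      # correntropy weights down-weight outliers
    grad = (k * err / sigma**2) @ X / n - 2 * lam * w
    w += lr * grad

acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy under noisy labels: {acc:.2f}")
```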
Sampling Based Average Classifier Fusion
Jian Hou
2014-01-01
Many fusion algorithms have been proposed in the literature, yet average fusion is almost always selected as the baseline for comparison. Little has been done on exploring the potential of average fusion and proposing a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. As a result, we find that, by proper sampling of soft labels and classifiers, the average fusion performance can be evidently improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
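A small sketch of sampling-based fusion on synthetic soft labels: plain average fusion over all classifiers is compared against the best randomly sampled subset on held-out data. The label model and the subset size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_clf, n_val, n_cls = 10, 300, 3
y_val = rng.integers(0, n_cls, n_val)

# Synthetic soft labels: seven informative classifiers and three poor ones.
quality = np.r_[np.full(7, 2.0), np.full(3, 0.2)]
soft = rng.dirichlet(np.ones(n_cls), (n_clf, n_val))
for i in range(n_clf):
    soft[i, np.arange(n_val), y_val] += quality[i]
soft /= soft.sum(axis=2, keepdims=True)

def fused_accuracy(members):
    """Accuracy of averaging the soft labels of the given classifiers."""
    pred = soft[members].mean(axis=0).argmax(axis=1)
    return (pred == y_val).mean()

baseline = fused_accuracy(np.arange(n_clf))        # average fusion of all classifiers
best = max(fused_accuracy(rng.choice(n_clf, 5, replace=False))
           for _ in range(200))                    # best sampled subset
print(f"all-classifier average {baseline:.3f}, best sampled subset {best:.3f}")
```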
2010-07-01
Excerpt from a flattened compliance-monitoring table: minimum operating parameters (minimum pressure drop across the wet scrubber or minimum horsepower, and scrubber liquor flow rate; monitored hourly) and maximum operating parameters (maximum charge rate, monitored continuously; maximum fabric filter...) for control configurations including a wet scrubber, a dry scrubber followed by a fabric filter, and a dry scrubber followed by a fabric filter and wet scrubber.
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we surveyed the development of the maximum-entropy clustering algorithm, pointed out that the maximum-entropy clustering algorithm is not new in essence, and constructed two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may converge not to a local minimum of its objective function, but to a saddle point. Based on these results, our paper shows that the convergence theorem of the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general cases.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
The periodicity of Grand Solar Minimum
Velasco Herrera, Victor Manuel
2016-07-01
The sunspot number is the most used index to quantify solar activity. Nevertheless, the sunspot number is a synthetic index and not a physical index. Therefore, we should be careful in using the sunspot number to quantify low (high) solar activity. One of the major problems of using sunspots to quantify solar activity is that the minimum value of the index is zero. This zero value hinders the reconstruction of the solar cycle during the Maunder minimum. All solar indexes can be used as analog signals, which can be easily converted into digital signals. In contrast, the conversion of a digital signal into an analog signal is not in general a simple task. The sunspot number during the Maunder minimum can be studied as a digital signal of solar activity. In 1894, Maunder published a discovery that has kept Solar Physics at an impasse. In his famous work on "A Prolonged Sunspot Minimum", Maunder wrote: "The sequence of maximum and minimum has, in fact, been unfailing during the present century [..] and yet there [..], the ordinary solar cycle was once interrupted, and one long period of almost unbroken quiescence prevailed". The search for new historical Grand solar minima has been one of the most important questions in Solar Physics. However, the possibility of estimating a new Grand solar minimum is even more valuable. Since solar activity is the result of electromagnetic processes, we propose to employ the power to quantify solar activity: this is a fundamental physics concept in electrodynamics. Total Solar Irradiance is the primary energy source of the Earth's climate system and therefore its variations can contribute to natural climate change. In this work, we propose to consider the fluctuations in the power of the Total Solar Irradiance as a physical measure of the energy released by the solar dynamo, which contributes to understanding the nature of "profound solar magnetic field in calm". Using a new reconstruction of the Total Solar Irradiance we found the
Physical Theories with Average Symmetry
Alamino, Roberto C
2013-01-01
This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented.
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of at most 6 weeks, three video recordings were made of each subject's five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
Minimum signals in classical physics
邓文基; 许基桓; 刘平
2003-01-01
The bandwidth theorem for Fourier analysis on any time-dependent classical signal is shown using the operator approach to quantum mechanics. Following discussions about squeezed states in quantum optics, the problem of minimum signals presented by a single quantity and its squeezing is proposed. It is generally proved that all such minimum signals, squeezed or not, must be real Gaussian functions of time.
Proton transport properties of poly(aspartic acid) with different average molecular weights
Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)
2011-04-15
Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference of the proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities under a relative humidity of 70% and 298 K were 1.7 × 10^-3 S cm^-1 (P-Asp140) and 4.6 × 10^-4 S cm^-1 (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results were classified into two categories. One exhibited two endothermic peaks between t = 270 and 300 °C, the other exhibited only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.
2012-09-13
Flattened table excerpt: the matrix produced by Wimer's algorithm, with elements Pj(u) indexed by node number u (u = 2, ..., N) and number of arcs j (j = 1, ..., q); a second matrix Z is assigned, with elements Zj(u). The model chooses "contract" carriers for long-term partnerships, so the need to model schedules is negated; see [10] for one detailed model.
Aini Hussain
2009-01-01
Full Text Available Problem statement: The electroencephalogram (EEG) is an extremely complex signal with a very low signal-to-noise ratio, which makes the signal difficult to analyze. Hence, for detecting abnormal segments, a distinctive method is required to train the technologist to distinguish anomalies in EEG data. The objective of this study was to create a framework to analyze EEG signals recorded from epileptic patients by evaluating the potential of the UMACE filter to detect changes in single-channel EEG data during routine epilepsy monitoring. Approach: Normally, the peak-to-sidelobe ratio (PSR) of a UMACE filter is employed as an indicator of whether test data are similar to an authentic class or vice versa; in this study, however, the consistent changes of the correlation output, known as the Region Of Interest (ROI), were plotted and monitored. Based on this approach, a novel method to analyze and distinguish variances in scalp EEG, as well as to compare normal and abnormal regions of the patient's EEG, was assessed. The performance of the novelty detection was examined based on the onset and end time of each seizure in the ROI plot. Results: Results showed that using the ROI plot of variances, one can distinguish irregularities in EEG data. The advantage of the proposed technique is that it does not require a large amount of training data. Conclusion: As such, it is feasible to perform seizure analysis as well as to localize seizure onsets. In short, the technique can be used as a guideline for faster diagnosis in lengthy EEG recordings.
75 FR 78157 - Farmer and Fisherman Income Averaging
2010-12-15
... computing income tax liability. The regulations reflect changes made by the American Jobs Creation Act of 2004 and the Tax Extenders and Alternative Minimum Tax Relief Act of 2008. The regulations provide...) relating to the averaging of farm and fishing income in computing tax liability. A notice of proposed...
1990-11-01
findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or... "Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate
Tehsin, Sara; Rehman, Saad; Awan, Ahmad B.; Chaudry, Qaiser; Abbas, Muhammad; Young, Rupert; Asif, Afia
2016-04-01
Sensitivity to the variations in the reference image is a major concern when recognizing target objects. A combinational framework of correlation filters and logarithmic transformation has been previously reported to resolve this issue alongside catering for scale and rotation changes of the object in the presence of distortion and noise. In this paper, we have extended the work to include the influence of different logarithmic bases on the resultant correlation plane. The meaningful changes in correlation parameters along with contraction/expansion in the correlation plane peak have been identified under different scenarios. Based on our research, we propose some specific log bases to be used in logarithmically transformed correlation filters for achieving suitable tolerance to different variations. The study is based upon testing a range of logarithmic bases for different situations and finding an optimal logarithmic base for each particular set of distortions. Our results show improved correlation and target detection accuracies.
Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.
2010-01-01
We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power, trunca
Quantized average consensus with delay
Jafarian, Matin; De Persis, Claudio
2012-01-01
Average consensus problem is a special case of cooperative control in which the agents of the network asymptotically converge to the average state (i.e., position) of the network by transferring information via a communication topology. One of the issues of the large scale networks is the cost of co
On the Minimum Induced Drag of Wings
Bowers, Albion H.
2011-01-01
Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load at which the aircraft is flying. The tools we use to calculate and predict induced drag were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, he and his students had optimized the problem to solve for the minimum induced drag of a wing of a given span, a solution formalized and written about in 1920. This solution is quoted extensively in textbooks today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and a solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb.
Defining a Minimum End Mill Diameter
A. E. Dreval'
2015-01-01
Full Text Available Industrial observations show that standard mill designs in many cases do not cover the full diversity of manufacturing operations, and many enterprises are forced to design and manufacture special (original) tool designs. An information search revealed a lack of end mill diameter calculations in publications. It has been proposed to calculate the end mill diameter either by empirical formulas [2, 3] or by selection from tables [4]. Formulas are obtained here for estimating the minimum diameter of an end mill able to perform the specified manufacturing operations, based on the strength of the mill body. The initial data for the calculation are the flow sheet of the milling operation and the properties of the processed and tool materials. The end mill is regarded as a cantilevered beam of circular cross section with diameter Dc (the mill core diameter) and overhang Lv from rigid fixing, loaded by the maximum bending force and torque. In deriving the formulas, the following assumptions, based on the analyzed sizes of the structural elements of standard mills, were used: the mill core diameter depends linearly on the mill diameter and the overhang, and the ratio 4τ²/σ² is constant, equal to 0.065 for contour milling and 0.17 for slot milling. Formulas for calculating the minimum diameter are derived for both contour milling and slot milling. The obtained dependences, which allow defining the minimum diameter of an end mill in terms of ensuring its strength, can be used to design mills for contour milling with radius transition sections, holes of different diameters in body parts, and other cases when processing with a single mill is preferable. Using the proposed dependencies to calculate a feed of the maximum tolerable strength is reasonable when designing mills for slots. Assumptions used in deriving
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
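A toy Robbins-Monro example of why trajectory averaging helps, with an invented root-finding target: the average of the iterates is a far less variable estimate than the final iterate. This illustrates the general idea only, not the SAMCMC setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
theta_star = 2.0                    # the (invented) root we want to find

def noisy_h(theta):
    """Unbiased but noisy observation of h(theta) = theta - theta_star."""
    return (theta - theta_star) + rng.normal(0, 1.0)

theta, path = 0.0, []
for t in range(1, 5001):
    theta -= (1.0 / t**0.7) * noisy_h(theta)   # slowly decaying gain sequence
    path.append(theta)

# The trajectory average smooths out the noise left in the last iterate.
print(f"last iterate {path[-1]:.3f}, trajectory average {np.mean(path[2500:]):.3f}")
```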
Quantum Monte Carlo for minimum energy structures
Wagner, Lucas K
2010-01-01
We present an efficient method to find minimum energy structures using energy estimates from accurate quantum Monte Carlo calculations. This method involves a stochastic process formed from the stochastic energy estimates from Monte Carlo that can be averaged to find precise structural minima while using inexpensive calculations with moderate statistical uncertainty. We demonstrate the applicability of the algorithm by minimizing the energy of the H2O-OH- complex and showing that the structural minima from quantum Monte Carlo calculations affect the qualitative behavior of the potential energy surface substantially.
Izawa, Kazuhiro P.; Watanabe, Satoshi; Hirano, Yasuyuki; Matsushima, Shinya; Suzuki, Tomohiro; Oka, Koichiro; Kida, Keisuke; Suzuki, Kengo; Osada, Naohiko; Omiya, Kazuto; Brubaker, Peter H.; Shimizu, Hiroyuki; Akashi, Yoshihiro J.
2015-01-01
Abstract Maximum gait speed and physical activity (PA) relate to mortality and morbidity, but little is known about gender-related differences in these factors in elderly hospitalized cardiac inpatients. This study aimed to determine differences in maximum gait speed and daily measured PA based on sex, and the relationship between these measures, in elderly cardiac inpatients. A total of 268 consecutive elderly Japanese cardiac inpatients (mean age, 73.3 years) were enrolled and divided by sex into female (n = 75, 28%) and male (n = 193, 72%) groups. Patient characteristics and maximum gait speed, average step count, and PA energy expenditure (PAEE) in kilocalories per day for 2 days assessed by accelerometer were compared between groups. Gait speed correlated positively with in-hospital PA measured by average daily step count (r = 0.46, P < 0.001) and average daily PAEE (r = 0.47, P < 0.001) in all patients. After adjustment for left ventricular ejection fraction, step counts and PAEE were significantly lower in females than males (2651.35 ± 1889.92 vs 4037.33 ± 1866.81 steps, P < 0.001; 52.74 ± 51.98 vs 99.33 ± 51.40 kcal, P < 0.001), respectively. Maximum gait speed was slower and PA lower in elderly female versus male inpatients. The gait speed and step count values in this study might be minimum target values for elderly male and female Japanese cardiac inpatients. PMID:25789953
OCCURRENCE OF HIGH-SPEED SOLAR WIND STREAMS OVER THE GRAND MODERN MAXIMUM
Mursula, K.; Holappa, L. [ReSoLVE Centre of Excellence, Department of Physics, University of Oulu (Finland); Lukianova, R., E-mail: kalevi.mursula@oulu.fi [Geophysical Center of Russian Academy of Science, Moscow (Russian Federation)
2015-03-01
In the declining phase of the solar cycle (SC), when the new-polarity fields of the solar poles are strengthened by the transport of same-signed magnetic flux from lower latitudes, the polar coronal holes expand and form non-axisymmetric extensions toward the solar equator. These extensions enhance the occurrence of high-speed solar wind (SW) streams (HSS) and related co-rotating interaction regions in the low-latitude heliosphere, and cause moderate, recurrent geomagnetic activity (GA) in the near-Earth space. Here, using a novel definition of GA at high (polar cap) latitudes and the longest record of magnetic observations at a polar cap station, we calculate the annually averaged SW speeds as proxies for the effective annual occurrence of HSS over the whole Grand Modern Maximum (GMM) from 1920s onward. We find that a period of high annual speeds (frequent occurrence of HSS) occurs in the declining phase of each of SCs 16-23. For most cycles the HSS activity clearly reaches a maximum in one year, suggesting that typically only one strong activation leading to a coronal hole extension is responsible for the HSS maximum. We find that the most persistent HSS activity occurred in the declining phase of SC 18. This suggests that cycle 19, which marks the sunspot maximum period of the GMM, was preceded by exceptionally strong polar fields during the previous sunspot minimum. This gives interesting support for the validity of solar dynamo theory during this dramatic period of solar magnetism.
Minimum Redundancy Coding for Uncertain Sources
Baer, Michael B; Charalambous, Charalambos D
2011-01-01
Consider the set of source distributions within a fixed maximum relative entropy with respect to a given nominal distribution. Lossless source coding over this relative entropy ball can be approached in more than one way. A problem previously considered is finding a minimax average length source code. The minimizing players are the codeword lengths --- real numbers for arithmetic codes, integers for prefix codes --- while the maximizing players are the uncertain source distributions. Another traditional minimizing objective is the first one considered here, maximum (average) redundancy. This problem reduces to an extension of an exponential Huffman objective treated in the literature but heretofore without direct practical application. In addition to these, this paper examines the related problem of maximal minimax pointwise redundancy and the problem considered by Gawrychowski and Gagie, which, for a sufficiently small relative entropy ball, is equivalent to minimax redundancy. One can consider both Shannon-...
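For background, a sketch of ordinary Huffman coding and the average redundancy it leaves over the entropy; the minimax and exponential variants discussed in the paper are not implemented here, and the source distribution is an example.

```python
import heapq
import numpy as np

p = [0.4, 0.3, 0.15, 0.1, 0.05]   # example source distribution

# Each heap item: (probability, unique tiebreaker, {symbol: codeword-so-far}).
heap = [(pi, i, {i: ""}) for i, pi in enumerate(p)]
heapq.heapify(heap)
count = len(heap)
while len(heap) > 1:              # repeatedly merge the two least likely nodes
    p1, _, c1 = heapq.heappop(heap)
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: "0" + w for s, w in c1.items()}
    merged.update({s: "1" + w for s, w in c2.items()})
    heapq.heappush(heap, (p1 + p2, count, merged))
    count += 1

code = heap[0][2]
avg_len = sum(p[s] * len(w) for s, w in code.items())
entropy = -sum(pi * np.log2(pi) for pi in p)
print(code, f"average redundancy = {avg_len - entropy:.3f} bits/symbol")
```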
Gaussian moving averages and semimartingales
Basse-O'Connor, Andreas
2008-01-01
In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results are constructive - meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...
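A discretized sketch of such a moving average: convolving a deterministic kernel with Wiener increments approximates X_t = ∫ φ(t−s) dW_s. The exponential kernel below is our choice for illustration (an Ornstein-Uhlenbeck-type example), not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n = 0.01, 5000
dW = rng.normal(0.0, np.sqrt(dt), n)      # Wiener increments over the grid
u = np.arange(0, 5, dt)
phi = np.exp(-u)                          # kernel phi(u) = e^{-u} for u >= 0

# X[k] approximates the stochastic convolution sum of phi(t - s_j) * dW_j.
X = np.convolve(dW, phi, mode="full")[:n]

# For this kernel the stationary variance is the integral of e^{-2u}, i.e. 0.5.
print(f"sample variance {X.var():.3f} vs stationary value 0.5")
```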
Split-plot fractional designs: Is minimum aberration enough?
Kulahci, Murat; Ramirez, Jose; Tobias, Randy
2006-01-01
Split-plot experiments are commonly used in industry for product and process improvement. Recent articles on designing split-plot experiments concentrate on minimum aberration as the design criterion. Minimum aberration has been criticized as a design criterion for completely randomized fractional factorial designs, and alternative criteria, such as the maximum number of clear two-factor interactions, are suggested (Wu and Hamada (2000)). The need for alternatives to minimum aberration is even more acute for split-plot designs. In a standard split-plot design, there are several types of two-factor interactions, in contrast to the situation for completely randomized designs. Consequently, we provide a modified version of the maximum number of clear two-factor interactions design criterion to be used for split-plot designs.
What causes geomagnetic activity during sunspot minimum
Kirov, Boian; Georgieva, Katya; Obridko, Vladimir
2014-01-01
The average geomagnetic activity during sunspot minimum has been continuously decreasing in the last four cycles. The geomagnetic activity is caused by both interplanetary disturbances - coronal mass ejections and high speed solar wind streams, and the background solar wind over which these disturbances ride. We show that the geomagnetic activity in cycle minimum does not depend on the number and parameters of coronal mass ejections or high speed solar wind streams, but on the background solar wind. The background solar wind has two components: slower and faster. The source of the slower component is the heliospheric current sheet, and of the faster one the polar coronal holes. It is supposed that the geomagnetic activity in cycle minimum is determined by the thickness of the heliospheric current sheet which is related to the portions of time the Earth spends in slow and in fast solar wind. We demonstrate that it is also determined by the parameters of these two components of the background solar wind which v...
Cook, Philip
2013-01-01
A minimum voting age is defended as the most effective and least disrespectful means of ensuring all members of an electorate are sufficiently competent to vote. Whilst it may be reasonable to require competency from voters, a minimum voting age should be rejected because its view of competence is unreasonably controversial, it is incapable of defining a clear threshold of sufficiency, and an alternative test is available which treats children more respectfully. This alternative is a procedura...
FORMALIZATION OF MINIMUM POWER CONSUMPTION CRITERION FOR CRUSHING UNIT
M. Y. Shpurgalova
2014-01-01
Full Text Available Analytical expressions describing the dependence between the basic parameters of the potash ore crushing process have been constructed in the paper. While taking into account the generality of the Kirpichev formula, some corrections have been made for direct applicability of the given hypothesis to the calculation of the energy required for crushing a potash ore specimen. Such an approach makes it possible to consider not only the general averaged size of specimens but also the percentage content of every concrete specimen of the specified dimensions. While investigating the potash ore composition of the prescribed volume, it has been established that every component contained in the specimen composition has its own tensile strength and elastic modulus. In addition, it has been demonstrated that the percentage content of components in the potash ore composition (sylvinite, halite and insoluble residue) is different. It has been experimentally determined that the selected volume of material (2 m³) supplied for beneficiation and the final product have a normal distribution of ore piece sizes, meaning that the number of average-size pieces is higher than the number of pieces of minimum and maximum sizes. An expression has been obtained on the basis of the executed investigations and the formula corresponding to the Kirpichev hypothesis. The expression makes it possible to calculate the energy required for crushing a specified volume of potash ore. In this case, the chemical composition and percentage content of components included in the potash ore have been taken into account. The energy required for crushing the potash ore volume consists of the total sum of the energies used for crushing the separate components included in the chemical composition of potash ore, each multiplied by the percentage content of the corresponding substance.
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N
2014-01-01
Tiger sharks (Galecerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 & 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-01-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. PMID:22308461
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max − X_0 = (0.59 ± 0.02) · Y_X/P · C.
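Applying the reported equation is simple arithmetic; in the sketch below only the coefficient 0.59 comes from the abstract, while the yield Y_X/P and the MIC C are hypothetical values chosen for illustration.

```python
# Predict maximum biomass from X_max = X_0 + 0.59 * Y_X/P * C.
X0 = 0.1      # inoculum biomass, g/L (hypothetical)
Y_XP = 0.12   # biomass yield per unit lactate produced, g/g (hypothetical)
C = 180.0     # MIC of lactate at pH 7.0, g/L (hypothetical)

X_max = X0 + 0.59 * Y_XP * C
print(f"predicted maximum biomass: {X_max:.2f} g/L")
```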
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Cycle Average Peak Fuel Temperature Prediction Using CAPP/GAMMA+
Tak, Nam-il; Lee, Hyun Chul; Lim, Hong Sik [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
In order to obtain a cycle average maximum fuel temperature without rigorous effort, a neutronics/thermo-fluid coupled calculation with depletion capability is needed. Recently, a CAPP/GAMMA+ coupled code system has been developed, and the initial core of PMR200 was analyzed using it. The GAMMA+ code is a system thermo-fluid analysis code and the CAPP code is a neutronics code. General Atomics proposed that the design limit of the fuel temperature under normal operating conditions should be a cycle-averaged maximum value. Nonetheless, the existing works of the Korea Atomic Energy Research Institute (KAERI) only calculated the maximum fuel temperature at a fixed time point, e.g., the beginning of cycle (BOC), because the calculation capability was not ready for a cycle average value. In this work, a cycle average maximum fuel temperature has been calculated using the CAPP/GAMMA+ code system for the equilibrium core of PMR200. The CAPP/GAMMA+ coupled calculation was carried out from BOC to the end of cycle (EOC) to obtain a cycle average peak fuel temperature. The peak fuel temperature was predicted to be 1372 °C near the middle of cycle. However, the cycle average peak fuel temperature was calculated as 1181 °C, which is below the design target of 1250 °C.
Averaged Electroencephalic Audiometry in Infants
Lentz, William E.; McCandless, Geary A.
1971-01-01
Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)
Ergodic averages via dominating processes
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary ...
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
2010-01-01
(a) A CDC's portfolio must maintain a minimum average of one Job Opportunity per an amount of 504 loan funding that will be specified by SBA from time to...
Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Yan, Xiaozhen; Xie, Wu; Xu, Zhen
2016-12-01
Band selection is considered an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (a neighborhood dependency measure based algorithm, a genetic algorithm, and an uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvements in band selection and classification accuracy.
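A minimal sketch of the forward greedy MRMR search (difference form), assuming continuous band values X of shape (n_samples, n_bands) and class labels y; scikit-learn's generic mutual-information estimators stand in for the paper's neighborhood mutual information, so this illustrates the criterion rather than the authors' exact algorithm:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_difference(X, y, n_select, random_state=0):
    """Forward greedy band selection under the MRMR 'difference' criterion."""
    n_bands = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=random_state)  # I(band; class)
    selected = [int(np.argmax(relevance))]                            # start with the most relevant band
    while len(selected) < n_select:
        best_j, best_score = None, -np.inf
        for j in range(n_bands):
            if j in selected:
                continue
            # average redundancy of candidate band j with the already-selected bands
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, k], random_state=random_state)[0]
                for k in selected
            ])
            score = relevance[j] - redundancy        # 'difference' form of MRMR
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```

The MRMR quotient variant would simply score candidates with relevance[j] / redundancy instead of the difference.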
Minimum Expenses, Maximum Savings: How to Live in China Smartly
2011-01-01
For more information, please click www.echinacities.com While it's at least very annoying, and at most woefully erroneous, that many Chinese people judge all foreigners to be totally minted, it's not hard to see why, when many foreigners are here living decadent lifestyles, partying on weekends (and weekdays), travelling all over the country and mincing around town with Macbooks, iPods and Ray Bans. But then there are the secret "squirrelers"
José Antonio Gutiérrez-Gallego
2015-01-01
Full Text Available This article describes the design of a traffic assignment model that predicts flows for each segment of an urban network with greater accuracy than the traditional four-step model, while also preserving trip origins and destinations. The research objectives are to determine traffic intensity in specific areas of the network, and to identify the origins and destinations of trips in order to predict changes in urban mobility. To achieve these objectives, relational databases and a geographic information system for analysing the transport supply (GIS-T) are used. This working environment is complemented with data from household interviews and intercept surveys to identify mobility patterns in the medium-sized city of Mérida, Spain. These application programs can detect changes in mobility patterns and locate problem areas. The results obtained demonstrate a high degree of fit between trip predictions and observations. Moreover, the levels of disaggregation in each midpoint section of the network, combined with the adjustment of population data using population pyramids, avoid biases in the trip samples.
Wage and Labor Standards Administration (DOL), Washington, DC.
This report describes the 1966 amendments to the Fair Labor Standards Act and summarizes the findings of three 1969 studies of the economic effects of these amendments. The studies found that economic growth continued through the third phase of the amendments, beginning February 1, 1969, despite increased wage and hours restrictions for recently…
The diversity-validity dilemma: In search of minimum adverse impact and maximum utility.
Callie Theron
2009-04-01
Full Text Available Selection from diverse groups of applicants poses the formidable challenge of developing valid selection procedures that simultaneously add value, do not discriminate unfairly, and minimise adverse impact. Valid selection procedures used in a fair, non-discriminatory manner that optimises utility, however, very often result in adverse impact against members of protected groups. More often than not, the assessment techniques used for selection are blamed for this. The conventional interpretation of adverse impact results in an erroneous diagnosis of the fundamental causes of the under-representation of protected group members and, consequently, in an inappropriate treatment of the problem.
"A minimum of urbanism and a maximum of ruralism": the Cuban experience.
Gugler, J
1980-01-01
The case of Cuba provides social scientists with reasonably good information on urbanization policies and their implementation in one developing country committed to socialism. The demographic context is considered, and Cuban efforts to eliminate the rural-urban contradiction and to redefine the role of Havana are described. The impact of these policies is analyzed in terms of available data on urbanization patterns since January 1959, when the revolutionaries marched into Havana. Prerevolutionary urbanization trends are considered. Fertility in Cuba has declined simultaneously with mortality, and even more rapidly. Projections assume a 1.85% annual growth rate, resulting in a population of nearly 15 million by the year 2000. Any estimate regarding the future trend in population growth must depend on a prognosis of general living conditions and of specific government policies regarding contraception, abortion, female labor force participation, and child care facilities. While population growth in Cuba has been substantial, it has been less dramatic than that of many other developing countries, and urban growth presents a similar picture. Cuba's highest rate of growth of the population living in urban centers with a population over 20,000, in any intercensal period during the 20th century, was 4.1%/year for 1943-1953. It dropped to 3.0% in the 1953-1970 period. Government policies achieved a measure of success in stemming the tide of rural-urban migration, but the aims of the revolutionary leadership went further. The objective was for urban dwellers to be involved in agriculture, and the living standards of the rural population were to be raised to approximate those of city dwellers. The goal of "urbanizing" the countryside found expression in a program designed to construct new small towns which could more easily be provided with services. A slowdown in the growth of Havana, and the concomitant weakening of its dominant position, was intended by the revolutionary leadership. Official policies have been enunciated that connect the reduction in the dominance of Havana with the slowdown in urban growth and the urbanization of the countryside. Evidence is presented which suggests achievements along all of these dimensions, but by 1970 they were, as yet, quite limited.
Blasques, José Pedro Albergaria Amaral; Stolpe, Mathias
2011-01-01
and cross section geometry. The resulting finite element matrices are significantly smaller than those obtained using equivalent finite element models. This modeling approach is therefore an attractive alternative in computationally intensive applications at the conceptual design stage where the focus...
The Round-Robin Mock Interview: Maximum Learning in Minimum Time
Marks, Melanie; O'Connor, Abigail H.
2006-01-01
Interview skills are critical to a job seeker's success in obtaining employment. However, learning interview skills takes time. This article offers an activity for providing students with interview practice while sacrificing only a single classroom period. The authors begin by reviewing relevant literature. Then, they outline the process of…
Gurung, Prabin
2015-01-01
The thesis was written to identify workable ideas and techniques of ecotourism for sustainable development and to establish the importance of ecotourism. It illustrates how ecotourism can play a beneficial role for visitors and local people. The thesis was based on ecotourism and its impacts; the case study covered Sauraha and Chitwan National Park. How can ecotourism be fruitful to local residents and nature, and what are the drawbacks of ecotourism? Ecotourism also has negative impacts on both th...
School violence: effective response protocols for maximum safety and minimum liability.
Miller, Laurence
2007-01-01
Despite the recent preoccupation with terrorism, most Americans are still killed by our own citizens, and school violence continues to be a significant source of mortality and trauma. This article describes the basic facts, features, and dynamics of school violence and presents a prevention, response, and recovery protocol adapted from the related field of workplace violence. This model may be used by educators, law enforcement professionals, and mental health clinicians in their collaborative efforts to make our academic institutions safer and healthier places to learn.
Minimum Q Electrically Small Antennas
Kim, O. S.
2012-01-01
Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions for the stored energies obtained through the vector spherical wave theory, it is shown that a magnetic-coated metal core reduces the internal stored energy of both TM1m and TE1m modes simultaneously, so that a self-resonant antenna with the Q approaching the fundamental minimum is created. Numerical results for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits a Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q.
WANG Yi; TAO Xiao-feng
2007-01-01
In this article, an inter-antenna inter-subblock shifting and inversion (IASSI) scheme is proposed to reduce the peak-to-average power ratio (PAPR) in multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems. It exploits multiple antennas and subblocks to provide additional degrees of freedom that benefit the system. To reduce the implementation complexity of the proposed scheme, two simple suboptimal schemes are further presented based on the minimum current maximum criterion; one adopts sequential search and the other employs random binary grouping. The simulation results exhibit the effectiveness of these proposed schemes.
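For reference, the quantity being reduced can be computed directly; a small sketch illustrating the PAPR definition only (not the IASSI scheme), with oversampling to approximate the continuous-time peak:

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR of one OFDM symbol, in dB, from its frequency-domain constellation symbols."""
    n = len(freq_symbols)
    # zero-pad in the middle of the spectrum to oversample the time-domain waveform
    padded = np.concatenate([freq_symbols[:n // 2],
                             np.zeros((oversample - 1) * n, dtype=complex),
                             freq_symbols[n // 2:]])
    x = np.fft.ifft(padded)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# example: random QPSK on 64 subcarriers
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
print(f"PAPR = {papr_db(qpsk):.2f} dB")
```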
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, $u_x' = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)$ (or $\Delta_t u_x = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)$), $x \in \mathbb{Z}$. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
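A minimal numerical sketch of the lattice equation with the Nagumo bistable nonlinearity f(u) = u(1-u)(u-a), using explicit Euler steps on a periodic lattice; the step-size restriction mirrors the paper's point that the discrete-time maximum principle only holds for small enough time steps (all parameters below are illustrative):

```python
import numpy as np

def nagumo_step(u, k=1.0, a=0.3, dt=0.1):
    """One explicit Euler step of u' = k*(u_{x-1} - 2u_x + u_{x+1}) + u(1-u)(u-a)."""
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # discrete Laplacian (periodic lattice)
    f = u * (1 - u) * (u - a)                      # bistable Nagumo nonlinearity
    return u + dt * (k * lap + f)

u = np.zeros(100)
u[45:55] = 1.0                 # initial plateau between the stable states 0 and 1
for _ in range(200):
    u = nagumo_step(u)
print(u.min(), u.max())        # stays within [0, 1] for small dt, as a weak maximum principle suggests
```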
Increasing the weight of minimum spanning trees
Frederickson, G.N.; Solis-Oba, R. [Purdue Univ., West Lafayette, IN (United States)
1996-12-31
Given an undirected connected graph G and a cost function for increasing edge weights, the problem of determining the maximum increase in the weight of the minimum spanning trees of G subject to a budget constraint is investigated. Two versions of the problem are considered. In the first, each edge has a cost function that is linear in the weight increase. An algorithm is presented that solves this problem in strongly polynomial time. In the second version, the edge weights are fixed but an edge can be removed from G at a unit cost. This version is shown to be NP-hard. An Ω(1/log k)-approximation algorithm is presented for it, where k is the number of edges to be removed.
Maximum Coronal Mass Ejection Speed as an Indicator of Solar and Geomagnetic Activities
Kilcik, A; Abramenko, V; Goode, P R; Gopalswamy, N; Ozguc, A; Rozelot, J P; 10.1088/0004-637X/727/1/44
2011-01-01
We investigate the relationship between the monthly averaged maximal speeds of coronal mass ejections (CMEs), international sunspot number (ISSN), and the geomagnetic Dst and Ap indices covering the 1996-2008 time interval (solar cycle 23). Our new findings are as follows. (1) There is a noteworthy relationship between monthly averaged maximum CME speeds and sunspot numbers, Ap and Dst indices. Various peculiarities in the monthly Dst index are correlated better with the fine structures in the CME speed profile than that in the ISSN data. (2) Unlike the sunspot numbers, the CME speed index does not exhibit a double peak maximum. Instead, the CME speed profile peaks during the declining phase of solar cycle 23. Similar to the Ap index, both CME speed and the Dst indices lag behind the sunspot numbers by several months. (3) The CME number shows a double peak similar to that seen in the sunspot numbers. The CME occurrence rate remained very high even near the minimum of the solar cycle 23, when both the sunspot ...
High average power supercontinuum sources
J C Travers
2010-11-01
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems with over 10 kW peak pump power. These systems can produce broadband supercontinua with average spectral powers of over 50 and 1 mW/nm, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.
Dependability in Aggregation by Averaging
Jesus, Paulo; Almeida, Paulo Sérgio
2010-01-01
Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of existing aggregation algorithms exhibit relevant dependability issues when considering their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, and give some directions for solving them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent of the routing topology used and providing an aggregation result at all nodes. However, their robustness is strongly challenged, and their correctness often compromised, when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...
Measuring Complexity through Average Symmetry
Alamino, Roberto C.
2015-01-01
This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle: measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalised, including to continuous cases an...
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/AC-coefficient-level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper bound for the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of space is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Minimum Thermal Conductivity of Superlattices
Simkin, M. V.; Mahan, G. D.
2000-01-31
The phonon thermal conductivity of a multilayer is calculated for transport perpendicular to the layers. There is a crossover from particle transport for thick layers to wave transport for thin layers. The calculations show that the conductivity has a minimum value for a layer thickness somewhat smaller than the mean free path of the phonons. (c) 2000 The American Physical Society.
Minimum landing size for bream (Abramis brama)
Hal, van R.; Miller, D.C.M.
2016-01-01
To support a decision on a minimum landing size for bream, primarily for the IJsselmeer and Markermeer, the Dutch Ministry of Economic Affairs asked IMARES to provide an overview of landing sizes for bream in other countries and, where possible, the motivation behind the...
Coupling between minimum scattering antennas
Andersen, J.; Lessow, H; Schjær-Jacobsen, Hans
1974-01-01
Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...
Thermospheric density model biases at the 23rd sunspot maximum
Pardini, C.; Moe, K.; Anselmo, L.
2012-07-01
Uncertainties in neutral density estimation are the major source of aerodynamic drag errors and one of the main limiting factors in the accuracy of the orbit prediction and determination process at low altitudes. Massive efforts have been made over the years to constantly improve the existing operational density models, or to create even more precise and sophisticated tools. Special attention has also been paid to researching more appropriate solar and geomagnetic indices. However, the operational models still suffer from weaknesses. Although a number of studies have been carried out in the last few years to define the performance improvements, further critical assessments are necessary to evaluate and compare the models at different altitudes and solar activity conditions. Taking advantage of the results of a previous study, an investigation of thermospheric density model biases during the last sunspot maximum (October 1999 - December 2002) was carried out by analyzing the semi-major axis decay of four satellites: Cosmos 2265, Cosmos 2332, SNOE and Clementine. Six thermospheric density models, widely used in spacecraft operations, were analyzed: JR-71, MSISE-90, NRLMSISE-00, GOST-2004, JB2006 and JB2008. During the time span considered, for each satellite and atmospheric density model, a fitted drag coefficient was solved for and then compared with the calculated physical drag coefficient. It was therefore possible to derive the average density biases of the thermospheric models during the maximum of the 23rd solar cycle. Below 500 km, all the models overestimated the average atmospheric density by amounts varying between +7% and +20%. This was an inevitable consequence of constructing thermospheric models from density data obtained by assuming a fixed drag coefficient, independent of altitude. Because the uncertainty affecting the drag coefficient measurements was about 3% at both 200 km and 480 km of altitude, the calculated air density biases below 500 km were
Alderson, Tim L.; Svenja Huntemann
2013-01-01
Singleton-type upper bounds on the minimum Lee distance of general (not necessarily linear) Lee codes over ℤq are discussed. Two bounds known for linear codes are shown to also hold in the general case, and several new bounds are established. Codes meeting these bounds are investigated and in some cases characterised.
Averaging and sampling for magnetic-observatory hourly data
J. J. Love
2010-11-01
Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
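A small sketch of the two standard hourly value types compared above, built from synthetic 1-min data (the signal below is an invented stand-in for real observatory minute values):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(24 * 60)                                   # one day of 1-min samples
field = 10 * np.sin(2 * np.pi * t / (24 * 60)) + rng.normal(0, 1, t.size)

spot = field[::60]                                       # instantaneous hourly "spot" values
boxcar = field.reshape(-1, 60).mean(axis=1)              # simple 1-h "boxcar" averages

# spot values preserve the amplitude range but alias the minute-scale noise;
# boxcar averages suppress that noise at the cost of some amplitude distortion
print(spot.std(), boxcar.std())
```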
Mirror averaging with sparsity priors
Dalalyan, Arnak
2010-01-01
We consider the problem of aggregating the elements of a (possibly infinite) dictionary for building a decision procedure, that aims at minimizing a given criterion. Along with the dictionary, an independent identically distributed training sample is available, on which the performance of a given procedure can be tested. In a fairly general set-up, we establish an oracle inequality for the Mirror Averaging aggregate based on any prior distribution. This oracle inequality is applied in the context of sparse coding for different problems of statistics and machine learning such as regression, density estimation and binary classification.
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
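The single-constraint maximum entropy construction referred to above can be written out in a few lines (notation chosen here for illustration, not the paper's):

$$\max_{p}\; S[p] = -\sum_{x} p(x)\ln p(x) \quad\text{subject to}\quad \sum_{x} p(x) = 1,\qquad \sum_{x} p(x)\ln x = \chi.$$

Introducing Lagrange multipliers and setting the variation with respect to $p(x)$ to zero gives

$$p(x) \propto e^{-\lambda \ln x} = x^{-\lambda},$$

a pure power law, with the exponent $\lambda$ fixed by the constraint value $\chi$.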
Minimum airflow reset of single-duct VAV terminal boxes
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
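A minimal sketch of the trajectory averaging estimator itself (not of SAMC): report the average of the parameter iterates rather than the final iterate; the toy Robbins-Monro recursion and burn-in below are illustrative assumptions:

```python
import numpy as np

def trajectory_average(iterates, burn_in=0):
    """Average of the iterates theta_k, k > burn_in, of a stochastic approximation run."""
    return np.asarray(iterates, dtype=float)[burn_in:].mean(axis=0)

# toy usage: noisy Robbins-Monro iterates targeting theta* = 2.0
rng = np.random.default_rng(0)
theta, path = 0.0, []
for n in range(1, 5001):
    theta -= (1.0 / n) * ((theta - 2.0) + rng.normal())   # noisy root-finding step
    path.append(theta)
print(trajectory_average(path, burn_in=1000))             # close to 2.0
```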
MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS
Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert
2003-05-01
A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
Understanding the Minimum Wage: Issues and Answers.
Employment Policies Inst. Foundation, Washington, DC.
This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…
2010-01-01
5 CFR 551.301 (Fair Labor Standards Act, Minimum Wage Provisions, Basic Provision): Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Quantum mechanics the theoretical minimum
Susskind, Leonard
2014-01-01
From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman's crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.
Kwee, R E; The ATLAS collaboration
2010-01-01
Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp-collisions to perform first measurements of charged particle densities. These measurements will help to constrain various models describing phenomenologically soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.09 < |eta| < 3.8 has been proven to select pp-collisions very efficiently, the Inner Detector based minimum bias trigger uses a random seed on filled bunches and the central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies on possible bias sources will be presen...
Minimum thickness anterior porcelain restorations.
Radz, Gary M
2011-04-01
Porcelain laminate veneers (PLVs) provide the dentist and the patient with an opportunity to enhance the patient's smile in a minimally to virtually noninvasive manner. Today's PLV demonstrates excellent clinical performance and as materials and techniques have evolved, the PLV has become one of the most predictable, most esthetic, and least invasive modalities of treatment. This article explores the latest porcelain materials and their use in minimum thickness restoration.
Minimum feature size preserving decompositions
Aloupis, Greg; Demaine, Martin L; Dujmovic, Vida; Iacono, John
2009-01-01
The minimum feature size of a crossing-free straight line drawing is the minimum distance between a vertex and a non-incident edge. This quantity measures the resolution needed to display a figure or the tool size needed to mill the figure. The spread is the ratio of the diameter to the minimum feature size. While many algorithms (particularly in meshing) depend on the spread of the input, none explicitly consider finding a mesh whose spread is similar to the input. When a polygon is partitioned into smaller regions, such as triangles or quadrangles, the degradation is the ratio of original to final spread (the final spread is always greater). Here we present an algorithm to quadrangulate a simple n-gon, while achieving constant degradation. Note that although all faces have a quadrangular shape, the number of edges bounding each face may be larger. This method uses Θ(n) Steiner points and produces Θ(n) quadrangles. In fact to obtain constant degradation, Ω(n) Steiner points are required by any al...
Ensemble average theory of gravity
Khosravi, Nima
2016-12-01
We put forward the idea that all the theoretically consistent models of gravity have contributions to the observed gravity interaction. In this formulation, each model comes with its own Euclidean path-integral weight, where general relativity (GR) automatically has the maximum weight in high-curvature regions. We employ this idea in the framework of Lovelock models and show that in four dimensions the result is a specific form of the f(R,G) model. This specific f(R,G) satisfies the stability conditions and possesses self-accelerating solutions. Our model is consistent with the local tests of gravity since its behavior is the same as in GR for the high-curvature regime. In the low-curvature regime the gravitational force is weaker than in GR, which can be interpreted as the existence of a repulsive fifth force for very large scales. Interestingly, there is an intermediate-curvature regime where the gravitational force is stronger in our model compared to GR. The different behavior of our model in comparison with GR in both low- and intermediate-curvature regimes makes it observationally distinguishable from ΛCDM.
A discussion on maximum entropy production and information theory
Bruers, Stijn [Instituut voor Theoretische Fysica, Celestijnenlaan 200D, Katholieke Universiteit Leuven, B-3001 Leuven (Belgium)
2007-07-06
We will discuss the maximum entropy production (MaxEP) principle based on Jaynes' information theoretical arguments, as was done by Dewar (2003 J. Phys. A: Math. Gen. 36 631-41, 2005 J. Phys. A: Math. Gen. 38 371-81). With the help of a simple mathematical model of a non-equilibrium system, we will show how to derive minimum and maximum entropy production. Furthermore, the model will help us to clarify some confusing points and to see differences between some MaxEP studies in the literature.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores, starting from Poisson probabilities, are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternative, minimum chi-squared, because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
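A hedged sketch of the contrast being drawn: fit by maximizing the Poisson log-likelihood rather than minimizing chi-squared. The power-law spectral model and all numbers below are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

energies = np.linspace(1.0, 10.0, 30)                    # channel energies (arbitrary units)

def model_counts(theta):
    log_norm, index = theta
    return np.exp(log_norm) * energies ** (-index)       # expected counts per channel

rng = np.random.default_rng(2)
data = rng.poisson(model_counts([np.log(100.0), 1.5]))   # synthetic observed counts

def neg_log_like(theta):
    mu = model_counts(theta)
    # Poisson log-likelihood up to a theta-independent constant: sum(n*log(mu) - mu)
    return -(data * np.log(mu) - mu).sum()

fit = minimize(neg_log_like, x0=[np.log(50.0), 1.0], method="Nelder-Mead")
print(np.exp(fit.x[0]), fit.x[1])                        # recovered normalization and index
```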
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Applicability of the minimum entropy generation method for optimizing thermodynamic cycles
Cheng Xue-Tao; Liang Xin-Gang
2013-01-01
Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For the cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed.
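As a one-line illustration of why the heat input must be prescribed (a standard energy and entropy balance, not taken from the paper): for a cycle that draws heat rate $\dot Q_h$ from a reservoir at $T_h$, produces power $\dot W$ and rejects the remainder to the environment at $T_0$,

$$\dot S_{\mathrm{gen}} = \frac{\dot Q_h - \dot W}{T_0} - \frac{\dot Q_h}{T_h} \quad\Longrightarrow\quad \dot W = \dot Q_h\Bigl(1 - \frac{T_0}{T_h}\Bigr) - T_0\,\dot S_{\mathrm{gen}},$$

so minimizing $\dot S_{\mathrm{gen}}$ maximizes $\dot W$ only if $\dot Q_h$ is held fixed, which is exactly the condition highlighted above.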
Ceramic veneers with minimum preparation.
da Cunha, Leonardo Fernandes; Reis, Rachelle; Santana, Lino; Romanini, Jose Carlos; Carvalho, Ricardo Marins; Furuse, Adilson Yoshio
2013-10-01
The aim of this article is to describe the possibility of improving dental esthetics with low-thickness glass ceramics without major tooth preparation for patients with small to moderate anterior dental wear and little discoloration. For this purpose, carefully defined treatment planning and good communication between the clinician and the dental technician helped to maximize enamel preservation and offered a good treatment option. Moreover, besides restoring esthetics, the restorative treatment also improved the function of the anterior guidance. It can be concluded that the conservative use of minimum-thickness ceramic laminate veneers may provide satisfactory esthetic outcomes while preserving the dental structure.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
Full Text Available We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
How minimum detectable displacement in a GNSS Monitoring Network change?
Hilmi Erkoç, Muharrem; Doǧan, Uǧur; Aydın, Cüneyt
2016-04-01
The minimum detectable displacement in a geodetic monitoring network is the displacement magnitude that can just be discriminated with known error probabilities. This displacement, which is originally deduced from sensitivity analysis, depends on the network design, observation accuracy, datum of the network, direction of the displacement, and power of the statistical test used for detecting the displacements. One may investigate how different scenarios for network design and observation accuracies influence the minimum detectable displacements for a specified datum, a priori forecasted directions and assumed power of the test, and decide which scenario is the best or most nearly optimal. It is sometimes difficult to forecast the directions of the displacements. In that case, the minimum detectable displacements in a geodetic monitoring network are derived along the eigen-directions associated with the maximum eigenvalues of the network stations. This study investigates how minimum detectable displacements in a GNSS monitoring network change depending on the accuracies of the network stations. For this, the CORS-TR network in Turkey with 15 stations (one station held fixed) is used. The data with 4 h, 6 h, 12 h and 24 h observing session durations in three sequential days of 2011, 2012 and 2013 were analyzed with the Bernese 5.2 GNSS software. The repeatabilities of the daily solutions belonging to each year were analyzed carefully to scale the Bernese cofactor matrices properly. The root mean square (RMS) values for daily repeatability with respect to the combined 3-day solution are computed (the RMS values are generally less than 2 mm in the horizontal directions (north and east) and < 5 mm in the vertical direction for a 24 h observing session duration). With the obtained cofactor matrices for these observing sessions, the minimum detectable displacements along the (maximum) eigen-directions are compared with each other. According to these comparisons, more session duration less minimum detectable
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
Perkell, J S; Hillman, R E; Holmberg, E B
1994-08-01
In previous reports, aerodynamic and acoustic measures of voice production were presented for groups of normal male and female speakers [Holmberg et al., J. Acoust. Soc. Am. 84, 511-529 (1988); J. Voice 3, 294-305 (1989)] that were used as norms in studies of voice disorders [Hillman et al., J. Speech Hear. Res. 32, 373-392 (1989); J. Voice 4, 52-63 (1990)]. Several of the measures were extracted from glottal airflow waveforms that were derived by inverse filtering a high-time-resolution oral airflow signal. Recently, the methods have been updated and a new study of additional subjects has been conducted. This report presents previous (1988) and current (1993) group mean values of sound pressure level, fundamental frequency, maximum airflow declination rate, ac flow, peak flow, minimum flow, ac-dc ratio, inferred subglottal air pressure, average flow, and glottal resistance. Statistical tests indicate overall group differences and differences for values of several individual parameters between the 1988 and 1993 studies. Some inter-study differences in parameter values may be due to sampling effects and minor methodological differences; however, a comparative test of 1988 and 1993 inverse filtering algorithms shows that some lower 1988 values of maximum flow declination rate were due at least in part to excessive low-pass filtering in the 1988 algorithm. The observed differences should have had a negligible influence on the conclusions of our studies of voice disorders.
Chang, Yin-Jung
2014-01-13
The investigation of optimum optical designs of interlayers and antireflection (AR) coatings for achieving maximum average transmittance (T_ave) into the CuIn(1-x)Ga(x)Se2 (CIGS) absorber of a typical CIGS solar cell, through the suppression of lossy-film-induced angular mismatches, is described. A simulated-annealing algorithm incorporated with a rigorous electromagnetic transmission-line network approach is applied with criteria of minimum average reflectance (R_ave) from the cell surface or maximum T_ave into the CIGS absorber. In the presence of one MgF2 coating, the difference in R_ave between optimum designs based upon the two distinct criteria is only 0.3% under broadband and nearly omnidirectional incidence; however, their corresponding T_ave values can be up to 14.34% apart. Significant T_ave improvements associated with the maximum-T_ave-based design are found mainly in the mid to longer wavelengths and are attributed to the largest suppression of lossy-film-induced angular mismatches over the entire CIGS absorption spectrum. Maximum-T_ave-based designs with a MgF2 coating optimized under extreme deficiency of angular information are shown, as opposed to their minimum-R_ave-based counterparts, to be highly robust to omnidirectional incidence.
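A generic simulated-annealing skeleton of the kind applied here; the one-parameter objective is a hypothetical stand-in for the rigorous transmission-line computation of R_ave over coating-thickness candidates:

```python
import math, random

def average_reflectance(thickness_nm):
    # placeholder objective with a single minimum near 110 nm (not the real R_ave)
    return 0.05 + 0.001 * (thickness_nm - 110.0) ** 2

def anneal(obj, x0, step=5.0, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimize obj by simulated annealing with a geometric cooling schedule."""
    rng = random.Random(seed)
    x, fx, t = x0, obj(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        x_new = x + rng.uniform(-step, step)
        f_new = obj(x_new)
        # always accept downhill moves; accept uphill moves with Boltzmann probability
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

print(anneal(average_reflectance, x0=150.0))   # approaches the 110 nm optimum
```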
Asymmetric k-Center with Minimum Coverage
Gørtz, Inge Li
2008-01-01
In this paper we give approximation algorithms and inapproximability results for various asymmetric k-center with minimum coverage problems. In the k-center with minimum coverage problem, each center is required to serve a minimum number of clients. These problems have been studied by Lim et al. [A. Lim, B. Rodrigues, F. Wang, Z. Xu, k-center problems with minimum coverage, Theoret. Comput. Sci. 332 (1-3) (2005) 1-17] in the symmetric setting.
An improved approximation ratio for the minimum latency problem
Goemans, M.; Kleinberg, J. [MIT, Cambridge, MA (United States)
1996-12-31
Given a tour visiting n points in a metric space, the latency of one of these points p is the distance traveled in the tour before reaching p. The minimum latency problem asks for a tour passing through n given points for which the total latency of the n points is minimum; in effect, we are seeking the tour with minimum average "arrival time." This problem has been studied in the operations research literature, where it has also been termed the "delivery-man problem" and the "traveling repairman problem." The approximability of the minimum latency problem was first considered by Sahni and Gonzalez in 1976; however, unlike the classical traveling salesman problem, it is not easy to give any constant-factor approximation algorithm for the minimum latency problem. Recently, Blum, Chalasani, Coppersmith, Pulleyblank, Raghavan, and Sudan gave the first such algorithm, obtaining an approximation ratio of 144. In this work, we present an algorithm which improves this ratio to 21.55. The development of our algorithm involves a number of techniques that seem to be of interest from the perspective of the traveling salesman problem and its variants more generally.
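The objective itself is easy to state in code; a small sketch of the total latency of a given tour over points in the plane (the points and visiting order below are arbitrary examples):

```python
import math

def total_latency(points, tour):
    """Sum of the distances traveled before reaching each point, in visiting order."""
    latency_sum, traveled = 0.0, 0.0
    for a, b in zip(tour, tour[1:]):
        traveled += math.dist(points[a], points[b])   # arrival time at point b
        latency_sum += traveled
    return latency_sum

pts = [(0, 0), (1, 0), (1, 1), (4, 0)]
print(total_latency(pts, [0, 1, 2, 3]))   # latency of visiting in index order from pts[0]
```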
Minimum Delay Moving Object Detection
Lao, Dong
2017-01-08
We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to the background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, while constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
Minimum Competency Testing and the Handicapped.
Wildemuth, Barbara M.
This brief overview of minimum competency testing and disabled high school students discusses: the inclusion or exclusion of handicapped students in minimum competency testing programs; approaches to accommodating the individual needs of handicapped students; and legal issues. Surveys of states that have mandated minimum competency tests indicate…
Do Some Workers Have Minimum Wage Careers?
Carrington, William J.; Fallick, Bruce C.
2001-01-01
Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…
Does the Minimum Wage Affect Welfare Caseloads?
Page, Marianne E.; Spetz, Joanne; Millar, Jane
2005-01-01
Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…
Minimum income protection in the Netherlands
van Peijpe, T.
2009-01-01
This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its effect
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem is true for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we enforce the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Kenneth W. K. Lui
2009-01-01
Full Text Available We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.
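The multimodality that motivates the relaxation is easy to see in the single complex tone case, where the concentrated ML cost is the periodogram. The following numpy sketch is purely illustrative (it is not the paper's semidefinite program, and the signal parameters are made up): a dense grid search finds the global maximum that a local ascent started far from the true frequency would miss.

```python
import numpy as np

# Single complex tone in white Gaussian noise: the ML frequency estimate
# maximizes the periodogram, a multimodal function of frequency.
rng = np.random.default_rng(0)
N, f_true = 64, 0.31
n = np.arange(N)
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
x = np.exp(2j * np.pi * f_true * n) + 0.3 * noise

def periodogram(f):
    # concentrated ML cost: |<x, e^{j 2 pi f n}>|^2 / N
    return np.abs(x @ np.exp(-2j * np.pi * f * n)) ** 2 / N

freqs = np.linspace(0.0, 1.0, 4096, endpoint=False)
f_hat = freqs[np.argmax([periodogram(f) for f in freqs])]
print(f"true f = {f_true}, grid-search ML estimate = {f_hat:.4f}")
```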
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
What is the minimum number of patients for quality control of lung cancer management in Norway?
Skaug, Knut; Eide, Geir E; Gulsvik, Amund
2016-11-01
There are few data available on the optimal number of lung cancer patients needed to generate and compare estimates of quality between units managing lung cancer. The number of lung cancer patients per management unit varies considerably in Norway, where there are 42 hospitals that treated between 1 and 454 lung cancer patients in 2011. Our aims were to estimate the differences in quality indicators that are of sufficient importance to change a pulmonary physician's lung cancer management program, and to estimate the size of the patient samples necessary to detect such differences. Twenty-six physicians were asked about the relative differences from a national average of quality indicators that would change their own lung cancer management program. Sample sizes were calculated to give valid estimates of quality of a management unit based on the prevalence of quality indicators and minimally important differences (MID). The average MID in quality indicators that would cause a change in management varied from 18% to 24% among the 26 chest physicians, depending on the indicator. To generate precise estimates for quality control of lung cancer care in Norway, the number of management units must be reduced. Given the present willingness of chest physicians to change their procedures for management of lung cancer according to the results of quality control indicators, we recommend a maximum of 10 units with a minimum of 200 incident lung cancer patients per year for each management center. © 2015 John Wiley & Sons Ltd.
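The sample-size reasoning can be reproduced with a textbook one-sample proportion calculation. The sketch below is hedged: the prevalence, alpha, and power values are illustrative assumptions, not taken from the study; with a 20% relative MID on an indicator of prevalence 0.5 it lands near the recommended 200 patients per unit.

```python
from scipy.stats import norm

def one_sample_proportion_n(p0, p1, alpha=0.05, power=0.80):
    """Approximate sample size to detect a shift from a national average p0
    to a unit-level p1 in a binary quality indicator (two-sided test)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    num = (za * (p0 * (1 - p0)) ** 0.5 + zb * (p1 * (1 - p1)) ** 0.5) ** 2
    return num / (p1 - p0) ** 2

# Illustrative: a 20% relative MID on an indicator with prevalence 0.50
print(round(one_sample_proportion_n(0.50, 0.60)))  # ~194 patients
```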
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
Measurement of the average lifetime of b hadrons
Adriani, O.; Aguilar-Benitez, M.; Ahlen, S.; Alcaraz, J.; Aloisio, A.; Alverson, G.; Alviggi, M. G.; Ambrosi, G.; An, Q.; Anderhub, H.; Anderson, A. L.; Andreev, V. P.; Angelescu, T.; Antonov, L.; Antreasyan, D.; Arce, P.; Arefiev, A.; Atamanchuk, A.; Azemoon, T.; Aziz, T.; Baba, P. V. K. S.; Bagnaia, P.; Bakken, J. A.; Ball, R. C.; Banerjee, S.; Bao, J.; Barillère, R.; Barone, L.; Baschirotto, A.; Battiston, R.; Bay, A.; Becattini, F.; Bechtluft, J.; Becker, R.; Becker, U.; Behner, F.; Behrens, J.; Bencze, Gy. L.; Berdugo, J.; Berges, P.; Bertucci, B.; Betev, B. L.; Biasini, M.; Biland, A.; Bilei, G. M.; Bizzarri, R.; Blaising, J. J.; Bobbink, G. J.; Bock, R.; Böhm, A.; Borgia, B.; Bosetti, M.; Bourilkov, D.; Bourquin, M.; Boutigny, D.; Bouwens, B.; Brambilla, E.; Branson, J. G.; Brock, I. C.; Brooks, M.; Bujak, A.; Burger, J. D.; Burger, W. J.; Busenitz, J.; Buytenhuijs, A.; Cai, X. D.; Capell, M.; Caria, M.; Carlino, G.; Cartacci, A. M.; Castello, R.; Cerrada, M.; Cesaroni, F.; Chang, Y. H.; Chaturvedi, U. K.; Chemarin, M.; Chen, A.; Chen, C.; Chen, G.; Chen, G. M.; Chen, H. F.; Chen, H. S.; Chen, M.; Chen, W. Y.; Chiefari, G.; Chien, C. Y.; Choi, M. T.; Chung, S.; Civinini, C.; Clare, I.; Clare, R.; Coan, T. E.; Cohn, H. O.; Coignet, G.; Colino, N.; Contin, A.; Costantini, S.; Cotorobai, F.; Cui, X. T.; Cui, X. Y.; Dai, T. S.; D'Alessandro, R.; de Asmundis, R.; Degré, A.; Deiters, K.; Dénes, E.; Denes, P.; DeNotaristefani, F.; Dhina, M.; DiBitonto, D.; Diemoz, M.; Dimitrov, H. R.; Dionisi, C.; Ditmarr, M.; Djambazov, L.; Dova, M. T.; Drago, E.; Duchesneau, D.; Duinker, P.; Duran, I.; Easo, S.; El Mamouni, H.; Engler, A.; Eppling, F. J.; Erné, F. C.; Extermann, P.; Fabbretti, R.; Fabre, M.; Falciano, S.; Fan, S. J.; Fackler, O.; Fay, J.; Felcini, M.; Ferguson, T.; Fernandez, D.; Fernandez, G.; Ferroni, F.; Fesefeldt, H.; Fiandrini, E.; Field, J. H.; Filthaut, F.; Fisher, P. H.; Forconi, G.; Fredj, L.; Freudenreich, K.; Friebel, W.; Fukushima, M.; Gailloud, M.; Galaktionov, Yu.; Gallo, E.; Ganguli, S. N.; Garcia-Abia, P.; Gele, D.; Gentile, S.; Gheordanescu, N.; Giagu, S.; Goldfarb, S.; Gong, Z. F.; Gonzalez, E.; Gougas, A.; Goujon, D.; Gratta, G.; Gruenewald, M.; Gu, C.; Guanziroli, M.; Guo, J. K.; Gupta, V. K.; Gurtu, A.; Gustafson, H. R.; Gutay, L. J.; Hangarter, K.; Hartmann, B.; Hasan, A.; Hauschildt, D.; He, C. F.; He, J. T.; Hebbeker, T.; Hebert, M.; Hervé, A.; Hilgers, K.; Hofer, H.; Hoorani, H.; Hu, G.; Hu, G. Q.; Ille, B.; Ilyas, M. M.; Innocente, V.; Janssen, H.; Jezequel, S.; Jin, B. N.; Jones, L. W.; Josa-Mutuberria, I.; Kasser, A.; Khan, R. A.; Kamyshkov, Yu.; Kapinos, P.; Kapustinsky, J. S.; Karyotakis, Y.; Kaur, M.; Khokhar, S.; Kienzle-Focacci, M. N.; Kim, J. K.; Kim, S. C.; Kim, Y. G.; Kinnison, W. W.; Kirkby, A.; Kirkby, D.; Kirsch, S.; Kittel, W.; Klimentov, A.; Klöckner, R.; König, A. C.; Koffeman, E.; Kornadt, O.; Koutsenko, V.; Koulbardis, A.; Kraemer, R. W.; Kramer, T.; Krastev, V. R.; Krenz, W.; Krivshich, A.; Kuijten, H.; Kumar, K. S.; Kunin, A.; Landi, G.; Lanske, D.; Lanzano, S.; Lebedev, A.; Lebrun, P.; Lecomte, P.; Lecoq, P.; Le Coultre, P.; Lee, D. M.; Lee, J. S.; Lee, K. Y.; Leedom, I.; Leggett, C.; Le Goff, J. M.; Leiste, R.; Lenti, M.; Leonardi, E.; Li, C.; Li, H. T.; Li, P. J.; Liao, J. Y.; Lin, W. T.; Lin, Z. Y.; Linde, F. L.; Lindemann, B.; Lista, L.; Liu, Y.; Lohmann, W.; Longo, E.; Lu, Y. S.; Lubbers, J. M.; Lübelsmeyer, K.; Luci, C.; Luckey, D.; Ludovici, L.; Luminari, L.; Lustermann, W.; Ma, J. M.; Ma, W. 
G.; MacDermott, M.; Malik, R.; Malinin, A.; Maña, C.; Maolinbay, M.; Marchesini, P.; Marion, F.; Marin, A.; Martin, J. P.; Martinez-Laso, L.; Marzano, F.; Massaro, G. G. G.; Mazumdar, K.; McBride, P.; McMahon, T.; McNally, D.; Merk, M.; Merola, L.; Meschini, M.; Metzger, W. J.; Mi, Y.; Mihul, A.; Mills, G. B.; Mir, Y.; Mirabelli, G.; Mnich, J.; Möller, M.; Monteleoni, B.; Morand, R.; Morganti, S.; Moulai, N. E.; Mount, R.; Müller, S.; Nadtochy, A.; Nagy, E.; Napolitano, M.; Nessi-Tedaldi, F.; Newman, H.; Neyer, C.; Niaz, M. A.; Nippe, A.; Nowak, H.; Organtini, G.; Pandoulas, D.; Paoletti, S.; Paolucci, P.; Pascale, G.; Passaleva, G.; Patricelli, S.; Paul, T.; Pauluzzi, M.; Paus, C.; Pauss, F.; Pei, Y. J.; Pensotti, S.; Perret-Gallix, D.; Perrier, J.; Pevsner, A.; Piccolo, D.; Pieri, M.; Piroué, P. A.; Plasil, F.; Plyaskin, V.; Pohl, M.; Pojidaev, V.; Postema, H.; Qi, Z. D.; Qian, J. M.; Qureshi, K. N.; Raghavan, R.; Rahal-Callot, G.; Rancoita, P. G.; Rattaggi, M.; Raven, G.; Razis, P.; Read, K.; Ren, D.; Ren, Z.; Rescigno, M.; Reucroft, S.; Ricker, A.; Riemann, S.; Riemers, B. C.; Riles, K.; Rind, O.; Rizvi, H. A.; Ro, S.; Rodriguez, F. J.; Roe, B. P.; Röhner, M.; Romero, L.; Rosier-Lees, S.; Rosmalen, R.; Rosselet, Ph.; van Rossum, W.; Roth, S.; Rubbia, A.; Rubio, J. A.; Rykaczewski, H.; Sachwitz, M.; Salicio, J.; Salicio, J. M.; Sanders, G. S.; Santocchia, A.; Sarakinos, M. S.; Sartorelli, G.; Sassowsky, M.; Sauvage, G.; Schegelsky, V.; Schmitz, D.; Schmitz, P.; Schneegans, M.; Schopper, H.; Schotanus, D. J.; Shotkin, S.; Schreiber, H. J.; Shukla, J.; Schulte, R.; Schulte, S.; Schultze, K.; Schwenke, J.; Schwering, G.; Sciacca, C.; Scott, I.; Sehgal, R.; Seiler, P. G.; Sens, J. C.; Servoli, L.; Sheer, I.; Shen, D. Z.; Shevchenko, S.; Shi, X. R.; Shumilov, E.; Shoutko, V.; Son, D.; Sopczak, A.; Soulimov, V.; Spartiotis, C.; Spickermann, T.; Spillantini, P.; Starosta, R.; Steuer, M.; Stickland, D. P.; Sticozzi, F.; Stone, H.; Strauch, K.; Stringfellow, B. C.; Sudhakar, K.; Sultanov, G.; Sun, L. Z.; Susinno, G. F.; Suter, H.; Swain, J. D.; Syed, A. A.; Tang, X. W.; Taylor, L.; Terzi, G.; Ting, Samuel C. C.; Ting, S. M.; Tonutti, M.; Tonwar, S. C.; Tóth, J.; Tsaregorodtsev, A.; Tsipolitis, G.; Tully, C.; Tung, K. L.; Ulbricht, J.; Urbán, L.; Uwer, U.; Valente, E.; Van de Walle, R. T.; Vetlitsky, I.; Viertel, G.; Vikas, P.; Vikas, U.; Vivargent, M.; Vogel, H.; Vogt, H.; Vorobiev, I.; Vorobyov, A. A.; Vuilleumier, L.; Wadhwa, M.; Wallraff, W.; Wang, C.; Wang, C. R.; Wang, X. L.; Wang, Y. F.; Wang, Z. M.; Warner, C.; Weber, A.; Weber, J.; Weill, R.; Wenaus, T. J.; Wenninger, J.; White, M.; Willmott, C.; Wittgenstein, F.; Wright, D.; Wu, S. X.; Wynhoff, S.; Wysłouch, B.; Xie, Y. Y.; Xu, J. G.; Xu, Z. Z.; Xue, Z. L.; Yan, D. S.; Yang, B. Z.; Yang, C. G.; Yang, G.; Ye, C. H.; Ye, J. B.; Ye, Q.; Yeh, S. C.; Yin, Z. W.; You, J. M.; Yunus, N.; Yzerman, M.; Zaccardelli, C.; Zaitsev, N.; Zemp, P.; Zeng, M.; Zeng, Y.; Zhang, D. H.; Zhang, Z. P.; Zhou, B.; Zhou, G. J.; Zhou, J. F.; Zhu, R. Y.; Zichichi, A.; van der Zwaan, B. C. C.; L3 Collaboration
1993-11-01
The average lifetime of b hadrons has been measured using the L3 detector at LEP, running at √s ≈ M_Z. A b-enriched sample was obtained from 432538 hadronic Z events collected in 1990 and 1991 by tagging electrons and muons from semileptonic b hadron decays. From maximum likelihood fits to the electron and muon impact parameter distributions, the average b hadron lifetime was measured to be τb = (1535 ± 35 ± 28) fs, where the first error is statistical and the second includes both the experimental and the theoretical systematic uncertainties.
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS....12 On average. On average means a rolling average of production or imports during the last two...
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
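The event-by-event likelihood construction described here can be sketched generically: each event carries a density under each process hypothesis, and the fit maximizes the total likelihood over the process fractions. In this toy Python sketch the per-event densities are random placeholders, not PEN's Monte Carlo PDFs; only the fitting mechanics are shown.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
K, n_events = 3, 10000
# Placeholder per-event densities under K process hypotheses (each row sums
# to 1 here only for convenience; real PDFs need not).
pdfs = rng.dirichlet(np.ones(K), size=n_events)

def nll(theta):
    # softmax keeps the process fractions positive and summing to one
    w = np.exp(theta - theta.max()); w /= w.sum()
    return -np.log(pdfs @ w).sum()  # negative log-likelihood over all events

res = minimize(nll, x0=np.zeros(K), method="Nelder-Mead")
w = np.exp(res.x - res.x.max()); w /= w.sum()
print("fitted process fractions:", np.round(w, 3))
```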
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist, in the cases considered here, of Gaussian derivatives taken at several scales and/or having different derivative orders.
Minimum degree condition forcing complete graph immersion
DeVos, Matt; Fox, Jacob; McDonald, Jessica; Mohar, Bojan; Scheide, Diego
2011-01-01
An immersion of a graph $H$ into a graph $G$ is a one-to-one mapping $f:V(H) \to V(G)$ and a collection of edge-disjoint paths in $G$, one for each edge of $H$, such that the path $P_{uv}$ corresponding to edge $uv$ has endpoints $f(u)$ and $f(v)$. The immersion is strong if the paths $P_{uv}$ are internally disjoint from $f(V(H))$. It is proved that for every positive integer $t$, every simple graph of minimum degree at least $200t$ contains a strong immersion of the complete graph $K_t$. For dense graphs one can say even more. If the graph has order $n$ and has $2cn^2$ edges, then there is a strong immersion of the complete graph on at least $c^2 n$ vertices in $G$ in which each path $P_{uv}$ is of length 2. As an application of these results, we resolve a problem raised by Paul Seymour by proving that the line graph of every simple graph with average degree $d$ has a clique minor of order at least $cd^{3/2}$, where $c>0$ is an absolute constant. For small values of $t$, $1\le t\le 7$, every simple graph of...
Linear Minimum variance estimation fusion
ZHU Yunmin; LI Xianrong; ZHAO Juan
2004-01-01
This paper shows that a general multisensor unbiased linearly weighted estimation fusion is essentially the linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending the Gauss-Markov estimation to the random parameter setting of distributed estimation fusion in the LMV sense. In this setting, the fused estimator is a weighted sum of local estimates, with the weights given by a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to the above optimization problem, which depends only on the covariance matrix C_K. Third, if the a priori information, namely the expectation and covariance of the estimated quantity, is unknown, a necessary and sufficient condition for the above LMV fusion to become the best unbiased LMV estimation with known prior information is presented. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of C_K for a class of multisensor linear systems with coupled measurement noises.
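In the special case of two unbiased local estimates with known, uncoupled error covariances, the LMV fusion reduces to the familiar covariance-weighted combination. A minimal sketch follows; it deliberately ignores the cross-covariances and coupled measurement noises the paper also treats.

```python
import numpy as np

def lmv_fuse(x1, P1, x2, P2):
    """Fuse two unbiased estimates x1, x2 with error covariances P1, P2
    (cross-covariance assumed zero). Returns fused estimate and covariance."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)          # fused covariance
    return P @ (P1i @ x1 + P2i @ x2), P   # covariance-weighted combination

x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, -0.1]), np.diag([2.0, 1.0])
x, P = lmv_fuse(x1, P1, x2, P2)
print("fused estimate:", x, " fused variances:", np.diag(P))
```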
Minimum Delay Moving Object Detection
Lao, Dong
2017-05-14
This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently from the "background" motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue. However, this leads to greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., within the fewest frames after the object moves, subject to a constraint on false alarms, defined as detections declared before the object moves or as incorrect or inaccurate segmentations at detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
Ahmed K. Hassan
2008-01-01
Full Text Available One of the serious problems in any wireless communication system using a multicarrier modulation technique like Orthogonal Frequency Division Multiplexing (OFDM) is its Peak to Average Power Ratio (PAPR). It limits the transmission power due to the limited dynamic range of the Analog to Digital and Digital to Analog Converters (ADC/DAC) and of the power amplifiers at the transmitter, which in turn sets the limit on the maximum achievable rate. This issue is especially important for mobile terminals to sustain longer battery lifetime. Therefore reducing PAPR can be regarded as an important step towards efficient and affordable mobile communication services. This paper presents an efficient PAPR reduction method for OFDM signals. The method is based on clipping and iterative processing. Iterative processing is performed to limit the PAPR in the time domain, but the subtraction of the peaks that exceed the PAPR threshold from the original signal is done in the frequency domain, not in the time domain as in the usual clipping technique. The results show that this method is capable of reducing the PAPR significantly with minimum bit error rate (BER) degradation.
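A generic iterative clip-in-time, correct-in-frequency loop in this spirit can be sketched as follows. The subcarrier layout, oversampling factor, and target PAPR are illustrative assumptions, not the paper's; the in-band restriction of the correction is what distinguishes this from plain clipping.

```python
import numpy as np

rng = np.random.default_rng(2)
K, L, n_iter, papr_target_db = 64, 4, 4, 6.0   # K subcarriers, L-x oversampling
N = K * L

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# random QPSK data on the K in-band subcarriers
X = np.zeros(N, dtype=complex)
X[:K] = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)
print(f"PAPR before: {papr_db(x):.2f} dB")

for _ in range(n_iter):
    a = np.sqrt(10 ** (papr_target_db / 10) * np.mean(np.abs(x) ** 2))
    clipped = np.where(np.abs(x) > a, a * x / np.abs(x), x)  # clip in time
    C = np.fft.fft(x - clipped) / np.sqrt(N)                 # clipping-noise spectrum
    C[K:] = 0.0                                # keep only the in-band correction
    X = X - C                                  # subtract the excess in frequency
    x = np.fft.ifft(X) * np.sqrt(N)
print(f"PAPR after:  {papr_db(x):.2f} dB")
```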
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
On-Off Minimum-Time Control With Limited Fuel Usage: Global Optima Via Linear Programming
DRIESSEN,BRIAN
1999-09-01
A method for finding a global optimum to the on-off minimum-time control problem with limited fuel usage is presented. Each control can take on only three possible values: maximum, zero, or minimum. The simplex method for linear systems naturally yields such a solution for the re-formulation presented herein because it always produces an extreme point solution to the linear program. Numerical examples for the benchmark linear flexible system are presented.
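The reformulation can be imitated with a small feasibility LP per candidate horizon. This is a hedged sketch for a double integrator using scipy, not the paper's exact formulation: the discretization, fuel budget, and bisection-by-scan are illustrative choices. Splitting u into nonnegative parts keeps the problem linear, and extreme-point LP solutions are naturally on-off (bang-off-bang).

```python
import numpy as np
from scipy.optimize import linprog

dt, u_max, fuel = 0.05, 1.0, 1.5
A = np.array([[1.0, dt], [0.0, 1.0]])      # exact ZOH double integrator
B = np.array([0.5 * dt**2, dt])
x0, xT = np.array([1.0, 0.0]), np.zeros(2)

def reachable(N):
    # columns map each step's control to its effect on the final state
    G = np.column_stack([np.linalg.matrix_power(A, N - 1 - k) @ B
                         for k in range(N)])
    A_eq = np.hstack([G, -G])              # u = u_plus - u_minus
    b_eq = xT - np.linalg.matrix_power(A, N) @ x0
    A_ub = np.ones((1, 2 * N)) * dt        # fuel: dt * sum(u+ + u-) <= fuel
    res = linprog(np.ones(2 * N), A_ub=A_ub, b_ub=[fuel],
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, u_max)] * (2 * N))
    return res.success

N = next(N for N in range(1, 400) if reachable(N))
print(f"minimum time ≈ {N * dt:.2f} s over {N} steps")
```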
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
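The regularizer's central quantity, the mutual information between classification responses and true labels, is easy to evaluate for discrete responses via plug-in entropies, I(Y; R) = H(Y) + H(R) − H(Y, R). The sketch below only evaluates this quantity on synthetic labels; the paper's smoothed, gradient-optimized estimate is not reproduced here.

```python
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def mutual_information(y, r):
    # joint histogram of true labels y and discretized responses r
    joint = np.zeros((y.max() + 1, r.max() + 1))
    np.add.at(joint, (y, r), 1.0)
    return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.ravel())

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 1000)                     # true labels
r = np.where(rng.random(1000) < 0.9, y, 1 - y)   # responses: 90% agreement
print(f"I(Y; response) ≈ {mutual_information(y, r):.3f} nats")
```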
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
How Do Alternative Minimum Wage Variables Compare?
Sara Lemos
2005-01-01
Several minimum wage variables have been suggested in the literature. Such a variety of variables makes it difficult to compare the associated estimates across studies. One problem is that these estimates are not always calibrated to represent the effect of a 10% increase in the minimum wage. Another problem is that these estimates measure the effect of the minimum wage on the employment of different groups of workers. In this paper we critically compare employment effect estimates using five...
Minimum wages, globalization and poverty in Honduras
Gindling, T. H.; Terrell, Katherine
2008-01-01
To be competitive in the global economy, some argue that Latin American countries need to reduce or eliminate labour market regulations such as minimum wage legislation because they constrain job creation and hence increase poverty. On the other hand, minimum wage increases can have a direct positive impact on family income and may therefore help to reduce poverty. We take advantage of a complex minimum wage system in a poor country that has been exposed to the forces of globalization to test...
Tracking error with minimum guarantee constraints
Diana Barro; Elio Canestrelli
2008-01-01
In recent years the popularity of indexing has greatly increased in financial markets and many different families of products have been introduced. Often these products also have a minimum guarantee in the form of a minimum rate of return at specified dates or a minimum level of wealth at the end of the horizon. Period of declining stock market returns together with low interest rate levels on Treasury bonds make it more difficult to meet these liabilities. We formulate a dynamic asset alloca...
Level sets of multiple ergodic averages
Ai-Hua, Fan; Ma, Ji-Hua
2011-01-01
We propose to study multiple ergodic averages from a multifractal analysis point of view. In some special cases in symbolic dynamics, the Hausdorff dimensions of the level sets of the limit of multiple ergodic averages are determined by using Riesz products.
Effect of Pressure on Minimum Fluidization Velocity
Zhu Zhiping; Na Yongjie; Lu Qinggang
2007-01-01
Minimum fluidization velocities of quartz sand and glass beads under pressures of 0.5, 1.0, 1.5 and 2.0 MPa were investigated. The minimum fluidization velocity decreases with increasing pressure. The influence of pressure on the minimum fluidization velocity is stronger for larger particles than for smaller ones. Based on the test results and the Ergun equation, an empirical equation for the minimum fluidization velocity is proposed, and the calculated results are comparable to other researchers' results.
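For comparison, the classical Ergun-equation route to the minimum fluidization velocity can be sketched as follows; pressure enters mainly through the gas density. The voidage, sphericity, and particle properties are illustrative assumptions, not the paper's fitted correlation.

```python
import numpy as np

def u_mf(d_p, rho_p, P, T=293.15, eps=0.45, phi=0.8, mu=1.8e-5, g=9.81):
    """Solve the Ergun quadratic for the minimum fluidization Reynolds number:
    1.75/(eps^3 phi) Re^2 + 150(1-eps)/(eps^3 phi^2) Re = Ar."""
    rho_g = P * 0.029 / (8.314 * T)              # ideal-gas air density, kg/m^3
    Ar = d_p**3 * rho_g * (rho_p - rho_g) * g / mu**2
    a = 1.75 / (eps**3 * phi)
    b = 150 * (1 - eps) / (eps**3 * phi**2)
    Re = (-b + np.sqrt(b**2 + 4 * a * Ar)) / (2 * a)
    return Re * mu / (rho_g * d_p)

for P in [0.1e6, 0.5e6, 1.0e6, 1.5e6, 2.0e6]:    # pressure in Pa
    print(f"P = {P/1e6:.1f} MPa -> u_mf ≈ {u_mf(5e-4, 2650.0, P):.3f} m/s")
```

Consistent with the abstract, the computed u_mf falls as pressure rises, because the gas density in the denominator of the drag terms grows with P.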
7 CFR 35.11 - Minimum requirements.
2010-01-01
..., Denmark, East Germany, England, Finland, France, Greece, Hungary, Iceland, Ireland, Italy, Liechtenstein..., Switzerland, Wales, West Germany, Yugoslavia), or Greenland shall meet each applicable minimum requirement...
Accurate Switched-Voltage voltage averaging circuit
金光, 一幸; 松本, 寛樹
2006-01-01
This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for the NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. Performance of this circuit is verified by PSpice simulations.
Spectral averaging techniques for Jacobi matrices
del Rio, Rafael; Schulz-Baldes, Hermann
2008-01-01
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
1975-06-25
conjugates of the roots of AH V. Thus the forward prediction error filter is a minimum phase filter. Since its output does not precede any of its input points...circle. The inverse of the forward prediction error filter is also a causal minimum phase filter. The inverse filter can be used to construct the...filter is a maximum phase filter (a minimum phase filter if the direction of time is reversed). When the maximum entropy assumption is valid, it
Average-Time Games on Timed Automata
Jurdzinski, Marcin; Trivedi, Ashutosh
2009-01-01
An average-time game is played on the infinite graph of configurations of a finite timed automaton. The two players, Min and Max, construct an infinite run of the automaton by taking turns to perform a timed transition. Player Min wants to minimise the average time per transition and player Max wants to maximise it. A solution of average-time games is presented using a reduction to an average-price game on a finite graph. A direct consequence is an elementary proof of determinacy for average-tim...
Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models
Rasmussen, Klaus Bolding
1994-01-01
The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model...
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani [MV80]. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft [HK73] also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) [GKK10]. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over [MV80]. We use a Markov chain similar to the hard-core model for Glauber Dynamics with fugacity parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution [V99], to design a faster algori...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
LI Guoqing; ZONG Haifeng; ZHANG Qingyun
2011-01-01
Variation in the length of day of the Earth (LOD, equivalent to the Earth's rotation rate) versus changes in atmospheric geopotential height fields and astronomical parameters were analyzed for the years 1962-2006. This revealed that there is a 27.3-day and an average 13.6-day periodic oscillation in LOD and atmospheric pressure fields following lunar revolution around the Earth. Accompanying the alternating change in celestial gravitational forcing on the Earth and its atmosphere, the Earth's LOD changes from minimum to maximum, then to minimum, and the atmospheric geopotential height fields in the tropics oscillate from low to high, then to low. The 27.3-day and average 13.6-day periodic atmospheric oscillation in the tropics is proposed to be a type of strong atmospheric tide, excited by celestial gravitational forcing. A formula for a Tidal Index was derived to estimate the strength of the celestial gravitational forcing, and a high degree of correlation was found between the Tidal Index determined by astronomical parameters, LOD, and atmospheric geopotential height. The reason for the atmospheric tide is the periodic departure of the lunar orbit from the celestial equator during lunar revolution around the Earth. The alternating asymmetric change in celestial gravitational forcing on the Earth and its atmosphere produces a "modulation" of the change in the Earth's LOD and atmospheric pressure fields.
Stochastic variational approach to minimum uncertainty states
Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)
1995-05-21
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)
Minimum Wage Effects in the Longer Run
Neumark, David; Nizalova, Olena
2007-01-01
Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…
New Minimum Wage Research: A Symposium.
Ehrenberg, Ronald G.; And Others
1992-01-01
Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…
5 CFR 630.206 - Minimum charge.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum charge. 630.206 Section 630.206 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS ABSENCE AND LEAVE Definitions and General Provisions for Annual and Sick Leave § 630.206 Minimum charge. (a) Unless an agency...
Stochastic variational approach to minimum uncertainty states
Illuminati, F; Illuminati, F; Viola, L
1995-01-01
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schrödinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials.
Monotonic Stable Solutions for Minimum Coloring Games
Hamers, H.J.M.; Miquel, S.; Norde, H.W.
2011-01-01
For the class of minimum coloring games (introduced by Deng et al. (1999)) we investigate the existence of population monotonic allocation schemes (introduced by Sprumont (1990)). We show that a minimum coloring game on a graph G has a population monotonic allocation scheme if and only if G is (P4,
MONTHLY AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN
VIGH MELINDA
2014-03-01
Full Text Available The Râul Negru hydrographic basin is a well individualised and relatively homogeneous physical-geographical unit within the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main collector and on two of the most powerful tributaries. The analysis period covers the last 25 years (1988-2012), which is sufficient to draw pertinent conclusions. The month of maximum discharge is April, which falls within the high-flow period of March-June. Minimum discharges appear in November, because of the lack of pluvial precipitation, and in January, because of high solid precipitation and because of water volume retention in ice. The frequencies of extreme discharges vary according to station position: in the mountain area over a small basin area, and in the depression over a large basin area. Variation coefficients point out very similar variation principles, showing a relative homogeneity of the flow processes.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Estimating the Size and Timing of the Maximum Amplitude of Solar Cycle 24
Ke-Jun Li; Peng-Xin Gao; Tong-Wei Su
2005-01-01
A simple statistical method is used to estimate the size and timing of the maximum amplitude of the next solar cycle (cycle 24). Presuming cycle 23 to be a short cycle (as is more likely), the minimum of cycle 24 should occur about December 2006 (±2 months) and the maximum around March 2011 (±9 months), with an amplitude of 189.9 ± 15.5 if it is a fast riser, or about 136 if it is a slow riser. If we presume cycle 23 to be a long cycle (as is less likely), the minimum of cycle 24 should occur about June 2008 (±2 months) and the maximum about February 2013 (±8 months), and the maximum will be about 137 or 80, depending on whether the cycle is a fast riser or a slow riser.
WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES
刘永平; 许贵桥
2003-01-01
This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of periodic multivariate functions in the space Lp(Td), and the average Bernstein σ-width, average Kolmogorov σ-width, and average linear σ-width of Sobolev classes of multivariate quantities.
Stochastic averaging of quasi-Hamiltonian systems
朱位秋
1996-01-01
A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light dampings subject to weakly stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems and that the results obtained by this method for several examples prove its effectiveness.
NOAA Average Annual Salinity (3-Zone)
California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
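The two decoding rules compared here can be reproduced exhaustively on a toy instance: zero-temperature (maximum likelihood) decoding takes the ground state, while finite-temperature (maximum entropy) decoding takes the sign of the Boltzmann-averaged spins. The couplings and fields below are made up and enumeration stands in for the annealer; only the comparison mechanics are shown.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n, beta = 8, 1.0
J = np.triu(rng.standard_normal((n, n)) * 0.3, 1)  # random couplings
h = rng.standard_normal(n) * 0.5                   # local fields ("received" bits)

states = np.array(list(product([-1, 1], repeat=n)))
E = -np.einsum('si,ij,sj->s', states, J, states) - states @ h

ml = states[np.argmin(E)]                 # zero-temperature / ML decoding
w = np.exp(-beta * (E - E.min())); w /= w.sum()
m = w @ states                            # Boltzmann-averaged spins
mep = np.sign(m).astype(int)              # finite-temperature bitwise decoding
print("ML decode    :", ml)
print("MaxEnt decode:", mep)
```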
EXPERIMENTAL STUDY OF MINIMUM IGNITION TEMPERATURE
Igor WACHTER
2015-12-01
Full Text Available The aim of this scientific paper is an analysis of the minimum ignition temperature of dust layer and the minimum ignition temperatures of dust clouds. It could be used to identify the threats in industrial production and civil engineering, on which a layer of combustible dust could occure. Research was performed on spent coffee grounds. Tests were performed according to EN 50281-2-1:2002 Methods for determining the minimum ignition temperatures of dust (Method A. Objective of method A is to determine the minimum temperature at which ignition or decomposition of dust occurs during thermal straining on a hot plate at a constant temperature. The highest minimum smouldering and carbonating temperature of spent coffee grounds for 5 mm high layer was determined at the interval from 280 °C to 310 °C during 600 seconds. Method B is used to determine the minimum ignition temperature of a dust cloud. Minimum ignition temperature of studied dust was determined to 470 °C (air pressure – 50 kPa, sample weight 0.3 g.
Minimum Ballistic Factor Problem of Slender Axial Symmetric Missiles
V. B. Tawakley
1979-01-01
Full Text Available The problem of determining the geometry of slender, axisymmetric missiles of minimum ballistic factor in hypersonic flow has been solved via the calculus of variations under the assumptions that the flow is Newtonian and the surface averaged skin-friction coefficient is constant. The study has been made for conditions of given length and diameter, given diameter and surfacearea, and given surface area and length. The earlier investigations/sup 8/ where only regular shapes were determined has been extended to cover those class of bodies which consist of regular shapes followed or preceded by zero slope shapes.
A polynomial time primal network simplex algorithm for minimum cost flows
Orlin, J.B. [MIT, Cambridge, MA (United States)
1996-12-31
In this extended abstract, we develop a polynomial time primal network simplex algorithm that runs in O(min(n^2 m log nC, n^2 m^2 log n)) time, where n is the number of nodes in the network, m is the number of arcs, and C denotes the maximum absolute arc cost if arc costs are integer and ∞ otherwise. We first introduce a pseudopolynomial variant of the network simplex algorithm called the "premultiplier algorithm." A vector π of node potentials is called a vector of premultipliers with respect to a rooted tree if each arc directed towards the root has a non-positive reduced cost and each arc directed away from the root has a non-negative reduced cost. We then develop a cost-scaling version of the premultiplier algorithm that solves the minimum cost flow problem in O(min(nm log nC, nm^2 log n)) pivots. With certain simple data structures, the average time per pivot can be shown to be O(n).
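For experimentation, an off-the-shelf primal network simplex is available in networkx. The usage sketch below sets up a tiny minimum cost flow instance; note that networkx's pivot rule differs from the premultiplier variant described above, only the problem setup is the same.

```python
import networkx as nx

# Minimum cost flow: node 'demand' is negative for supplies, positive for
# demands; edges carry 'capacity' and per-unit 'weight'.
G = nx.DiGraph()
G.add_node("s", demand=-4)   # supplies 4 units
G.add_node("t", demand=4)    # demands 4 units
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "b", capacity=2, weight=4)
G.add_edge("a", "t", capacity=3, weight=2)
G.add_edge("b", "t", capacity=2, weight=1)

cost, flow = nx.network_simplex(G)
print(cost, flow)  # cost 14: 3 units via s->a->t, 1 unit via s->b->t
```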
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
2010-07-01
... mercury (Hg) sorbent flow rate Hourly Once per hour ✔ ✔ Minimum pressure drop across the wet scrubber or... rural HMIWI HMIWI a with dry scrubber followed by fabric filter HMIWI a with wet scrubber HMIWI a with dry scrubber followed by fabric filter and wet scrubber Maximum operating parameters: Maximum...
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
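The brute-force enumeration baseline mentioned here amounts to fitting every (p, q) on a grid by Kalman-filter maximum likelihood and keeping the AIC minimiser. A sketch using statsmodels follows; the grid bounds and the simulated ARMA(1,1) series are illustrative assumptions, not the article's data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
T = 400
y, e = np.zeros(T), rng.standard_normal(T)
for t in range(1, T):                     # simulate an ARMA(1,1) process
    y[t] = 0.6 * y[t - 1] + e[t] + 0.3 * e[t - 1]

# enumerate orders; ARIMA with d=0 is an ARMA fit via Kalman-filter ML
best = min(((ARIMA(y, order=(p, 0, q)).fit().aic, p, q)
            for p in range(3) for q in range(3)),
           key=lambda r: r[0])
print(f"best (p, q) by AIC: ({best[1]}, {best[2]}), AIC = {best[0]:.1f}")
```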
A global-scale investigation of trends in annual maximum streamflow
Do, Hong X.; Westra, Seth; Leonard, Michael
2017-09-01
This study investigates the presence of trends in annual maximum daily streamflow data from the Global Runoff Data Centre database, which holds records of 9213 stations across the globe. The records were divided into three reference datasets representing different compromises between spatial coverage and minimum record length, followed by further filtering based on continent, Köppen-Geiger climate classification, presence of dams, forest cover changes and catchment size. Trends were evaluated using the Mann-Kendall nonparametric trend test at the 10% significance level, combined with a field significance test. The analysis found substantial differences between reference datasets in terms of the specific stations that exhibited significant increasing or decreasing trends, showing the need for careful construction of statistical methods. The results were more consistent at the continental scale, with decreasing trends for a large number of stations in western North America and the data-covered regions of Australia, and increasing trends in parts of Europe, eastern North America, parts of South America and southern Africa. Interestingly, neither the presence of dams nor changes in forest cover had a large effect on the trend results, but the catchment size was important, as catchments exhibiting increasing (decreasing) trends tended to be smaller (larger). Finally, there were more stations with significant decreasing trends than significant increasing trends across all the datasets analysed, indicating that limited evidence exists for the hypothesis that flood hazard is increasing when averaged across the data-covered regions of the globe.
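For readers who want to reproduce the core test, a self-contained Mann-Kendall implementation (without the tie correction, and not the authors' code) looks like this:

```python
import math

def mann_kendall(x, alpha=0.10):
    """Two-sided Mann-Kendall trend test; returns (z, p, significant)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p, p < alpha

# Example: a weakly increasing annual-maximum series.
print(mann_kendall([3.1, 2.9, 3.4, 3.8, 3.6, 4.0, 4.3, 4.1]))
```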
Does the Minimum Wage Cause Inefficient Rationing?
何满辉; 梁明秋
2008-01-01
By not allowing wages to clear the labor market, the minimum wage could cause workers with low reservation wages to be rationed out while equally skilled workers with higher reservation wages are employed. I find that proxies for reservation wages of unskilled workers in high-impact states did not rise relative to reservation wages in other states, suggesting that the increase in the minimum wage did not cause jobs to be allocated less efficiently. However, even if rationing is efficient, the minimum wage can still entail other efficiency costs.
Minimum emittance in TBA and MBA lattices
Xu, Gang; Peng, Yue-Mei
2015-03-01
For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) cells and even multiple bend achromats (MBA) have been considered. This paper derives the necessary condition for achieving minimum emittance in TBA and MBA lattices theoretically, in which the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate the conditions attaining the minimum emittance of TBA related to phase advance in some special cases with a pure mathematics method. These results may give some directions on lattice design.
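For orientation, the TME scaling that underlies such designs is commonly quoted as (a standard relation stated here for context, not this paper's derivation)

ε_TME = C_q γ^2 θ^3 / (12√15 J_x), with C_q ≈ 3.83×10^-13 m,

where γ is the Lorentz factor, θ the bending angle of the dipole and J_x the horizontal damping partition number. Because the emittance contribution of a dipole scales as θ^3 and the achievable minimum in the outer, dispersion-matched cells is three times the TME, equalizing the contributions of inner and outer cells gives precisely the 3^(1/3) ratio between the bending angles.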
Average Transmission Probability of a Random Stack
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
Average sampling theorems for shift invariant subspaces
Anonymous
2000-01-01
The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.
Testing linearity against nonlinear moving average models
de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.
1998-01-01
Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small sample performance of the proposed tests is evaluated and compared in a Monte Carlo study.
Averaging Einstein's equations : The linearized case
Stoeger, William R.; Helmi, Amina; Torres, Diego F.
2007-01-01
We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW situations.
Average excitation potentials of air and aluminium
Bogaardt, M.; Koudijs, B.
1951-01-01
By means of a graphical method the average excitation potential I may be derived from experimental data. Average values for I_air and I_Al have been obtained. It is shown that in representing range/energy relations by means of Bethe's well known formula, I has to be taken as a continuously changing function.
2010-07-19
... CFR Part 3015, Subpart V, and the final rule related notice published at 48 FR 29114, June 24, 1983... Average Payments/Maximum Reimbursement Rates. AGENCY: Food and Nutrition Service, USDA. ACTION: Notice. SUMMARY: This Notice announces the annual adjustments to the "national average payments," the amount...
New results on averaging theory and applications
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at it is zero, the classical averaging theory does not provide information about the periodic solution associated with a non-simple zero. Here we provide sufficient conditions in order that the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
Analogue Divider by Averaging a Triangular Wave
Selvam, Krishnagiri Chinnathambi
2017-08-01
A new analogue divider circuit based on averaging a triangular wave using operational amplifiers is explained in this paper. The reference triangular waveform is shifted from the zero voltage level up towards the positive power supply voltage level. Its positive portion is obtained by a positive rectifier and its average value by a low pass filter. The same triangular waveform is shifted from the zero voltage level down towards the negative power supply voltage level. Its negative portion is obtained by a negative rectifier and its average value by another low pass filter. Both averaged voltages are combined in a summing amplifier and the summed voltage is fed to an op-amp as the negative input. This op-amp is configured in a closed negative-feedback loop, and its output is the divider output.
Reference respiratory waveforms by minimum jerk model analysis
Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka; Yagi, Masashi; Mizuno, Hirokazu; Ogawa, Kazuhiko [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Yamadaoka 2-2, Suita-shi, Osaka 565-0871 (Japan); Ota, Seiichi [Department of Medical Technology, Osaka University Hospital, Yamadaoka 2-15, Suita-shi, Osaka 565-0871 (Japan)
2015-09-15
Purpose: The CyberKnife® robotic surgery system has the ability to deliver radiation to a tumor subject to respiratory movements using Synchrony® mode with less than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joint, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically derives smoothness by means of jerk (the third derivative of position with respect to time, i.e., the derivative of acceleration, which is proportional to the time rate of change of force), was introduced to model a patient-specific respiratory motion wave to provide smooth motion tracking using CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were being tracked smoothly by Synchrony® mode, a tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion was in accordance with three pattern waves (cosine, typical free-breathing, and minimum jerk theoretical wave models) for the clinically relevant superior-inferior directions from six volunteers assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum-peak amplitude of radial tracking discrepancy compared with that of the waveforms modeled by the cosine and typical free-breathing models by 22% and 35%, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy as indicated by radial tracking discrepancy...
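The minimum jerk polynomial itself is simple to generate. A minimal sketch using the Flash-Hogan quintic; the amplitudes and sample counts are illustrative, not patient data:

```python
import numpy as np

def min_jerk(x0, xf, n=100):
    """Minimum jerk profile from x0 to xf, sampled at n points."""
    tau = np.linspace(0.0, 1.0, n)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# One breathing cycle: exhale position 0 mm -> inhale peak 10 mm and back,
# with a slower exhale phase (more samples at the same sampling rate).
inhale = min_jerk(0.0, 10.0, n=100)
exhale = min_jerk(10.0, 0.0, n=150)
cycle = np.concatenate([inhale, exhale])
print(round(cycle.min(), 3), round(cycle.max(), 3))   # 0.0 10.0
```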
Picosecond mid-infrared amplifier for high average power.
Botha, LR
2007-04-01
Full Text Available ... are similar. The saturation fluence for a multi-level system can be written as E_sat = P·hν/(2zσ), with σ the stimulated emission cross section and P the pressure of the laser; 1/z is essentially the average number of populated rotational levels. For our case z = 0.07 and σ = 1.54×10^-18 cm^2. Thus for a 10 atm laser the saturation fluence is E_sat = (10 × 6.626×10^-34 × 2.9×10^13)/(2 × 0.07 × 1.54×10^-18) ≈ 0.9 J/cm^2. The maximum...
The Average-Case Area of Heilbronn-Type Triangles
Jiang, T.; Li, Ming; Vitányi, Paul
1999-01-01
From among $ {n \choose 3}$ triangles with vertices chosen from $n$ points in the unit square, let $T$ be the one with the smallest area, and let $A$ be the area of $T$. Heilbronn's triangle problem asks for the maximum value assumed by $A$ over all choices of $n$ points. We consider the average case: if the $n$ points are chosen independently and at random (with a uniform distribution), then there exist positive constants $c$ and $C$ such that $c/n^3 < \mu_n < C/n^3$ for all large enough values of $n$.
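The stated Θ(1/n^3) scaling is easy to probe numerically. A brute-force Monte Carlo sketch (illustrative, with a fixed seed; the last printed column should stay roughly flat as n grows):

```python
import itertools
import random

def min_triangle_area(pts):
    """Smallest triangle area over all triples of the given points."""
    def area(p, q, r):
        return abs((q[0] - p[0]) * (r[1] - p[1])
                   - (r[0] - p[0]) * (q[1] - p[1])) / 2.0
    return min(area(p, q, r) for p, q, r in itertools.combinations(pts, 3))

random.seed(0)
for n in (8, 16, 32):
    trials = [min_triangle_area([(random.random(), random.random())
                                 for _ in range(n)]) for _ in range(100)]
    mean = sum(trials) / len(trials)
    print(n, mean, mean * n**3)
```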
Recent advances in phase shifted time averaging and stroboscopic interferometry
Styk, Adam; Józwik, Michał
2016-08-01
Classical Time Averaging and Stroboscopic Interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately both methods require extensive measurement and data processing strategies in order to evaluate the maximum amplitude at a given load of the vibrating object. In this paper modified data processing strategies for both techniques are introduced. These modifications allow for fast and reliable calculation of the searched value, without additional complication of the measurement systems. Throughout the paper both approaches are discussed and experimentally verified.
Mihaescu, Mihai; Murugappan, Shanmugam; Kalra, Maninder; Khosla, Sid; Gutmark, Ephraim
2008-07-19
Computational fluid dynamics techniques employing primarily steady Reynolds-Averaged Navier-Stokes (RANS) methodology have been recently used to characterize the transitional/turbulent flow field in human airways. The use of RANS implies that flow phenomena are averaged over time, so that the flow dynamics are not captured. Further, RANS uses two-equation turbulence models that are not adequate for predicting anisotropic flows, flows with high streamline curvature, or flows where separation occurs. A more accurate approach for such flow situations that occur in the human airway is Large Eddy Simulation (LES). The paper considers flow modeling in a pharyngeal airway model reconstructed from cross-sectional magnetic resonance scans of a patient with obstructive sleep apnea. The airway model is characterized by a maximum narrowing at the site of the retropalatal pharynx. Two flow-modeling strategies are employed: steady RANS and the LES approach. In the RANS modeling framework both k-epsilon and k-omega turbulence models are used. The paper discusses the differences between the airflow characteristics obtained from the RANS and LES calculations. The largest discrepancies were found in the axial velocity distributions downstream of the minimum cross-sectional area. This region is characterized by flow separation and large radial velocity gradients across the developed shear layers. The largest difference in static pressure distributions on the airway walls was found between the LES and the k-epsilon data at the site of maximum narrowing in the retropalatal pharynx.
Long Term Care Minimum Data Set (MDS)
U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...
Quantitative Research on the Minimum Wage
Goldfarb, Robert S.
1975-01-01
The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)
Impact of the Minimum Wage on Compression.
Wolfe, Michael N.; Candland, Charles W.
1979-01-01
Assesses the impact of increases in the minimum wage on salary schedules, provides guidelines for creating a philosophy to deal with the impact, and outlines options and presents recommendations. (IRT)
Minimum wages and employment in China
Fang, Tony; Lin, Carl
2015-01-01
... that minimum wage changes led to significant adverse effects on employment in the Eastern and Central regions of China, and resulted in disemployment for females, young adults, and low-skilled workers...
Minimum Wage Policy and Country's Technical Efficiency
Mohd Zaini Abd Karim; Sok-Gee Chan; Sallahuddin Hassan
2016-01-01
.... However, some quarters argued against the idea of a nationwide minimum wage asserting that it will lead to an increase in the cost of doing business and thus will hurt Malaysian competitiveness...
Graph theory for FPGA minimum configurations
Ruan Aiwu; Li Wenchang; Xiang Chuanyin; Song Jiangmin; Kang Shi; Liao Yongbo
2011-01-01
A traditional bottom-up modeling method for minimum configuration numbers is adopted for the study of FPGA minimum configurations. This method is limited if a large number of LUTs and multiplexers are present. Since graph theory has been extensively applied to circuit analysis and test, this paper focuses on modeling FPGA configurations with graph theory. In our study, an internal logic block and the interconnections of an FPGA are considered as a vertex and an edge connecting two vertices in the graph, respectively. A top-down modeling method is proposed in the paper to achieve minimum configuration numbers for the CLB and IOB. Based on the proposed modeling approach and exhaustive analysis, the minimum configuration numbers for the CLB and IOB are five and three, respectively.
Quantification of Aggregate Topology, the Minimum Dimension and Connectivity
Rai, Durgesh; Beaucage, Gregory; Ilavsky, Jan; Kammler, Hendrik
2010-03-01
The properties (electrical conductivity, diffusion coefficient, spring constant) of nanostructured ceramic aggregates can be determined only if details of the structural topology are known. For example, the mechanical strength of an aggregate depends only on the shortest average path through the aggregate, called the minimum path. Most characterization methods fail to quantify the topology. Values of the minimum dimension, associated with the minimum path, and the spectral dimension, associated with energy distribution in an aggregate, have been considered only in simulations and models. Recently we have developed a method using small-angle neutron and x-ray scattering for the quantification of the details of topology in aggregated materials (Beaucage 2004, Ramachandran 2008, 2009). In situ SAXS studies of flame aerosols containing nanostructured aggregates will be presented. Their topology as a function of growth time on the millisecond time scale will be described. Beaucage G., Phys. Rev. E 70, 031401 (2004); Ramachandran R., et al., Macromolecules 41, 9802-9806 (2008); Ramachandran R., et al., Macromolecules 42, 4746-4750 (2009).
Predicting Maximum Sunspot Number in Solar Cycle 24
Nipa J Bhatt; Rajmal Jain; Malini Aggarwal
2009-03-01
A few prediction methods have been developed based on the precursor technique which is found to be successful for forecasting the solar activity. Considering the geomagnetic activity aa indices during the descending phase of the preceding solar cycle as the precursor, we predict the maximum amplitude of annual mean sunspot number in cycle 24 to be 111 ± 21. This suggests that the maximum amplitude of the upcoming cycle 24 will be less than cycles 21–22. Further, we have estimated the annual mean geomagnetic activity aa index for the solar maximum year in cycle 24 to be 20.6 ± 4.7 and the average of the annual mean sunspot number during the descending phase of cycle 24 is estimated to be 48 ± 16.8.
Price pass-through and minimum wages
Daniel Aaronson
1997-01-01
A textbook consequence of competitive markets is that an industry-wide increase in the price of inputs will be passed on to consumers through an increase in prices. This fundamental implication has been explored by researchers interested in who bears the burden of taxation and exchange rate fluctuations. However, little attention has focused on the price implications of minimum wage hikes. From a policy perspective, this is an oversight. Welfare analysis of minimum wage laws should not ignore...
The minimum wage and restaurant prices
Daniel Aaronson; Eric French; MacDonald, James M.
2004-01-01
Using both store-level and aggregated price data from the food away from home component of the Consumer Price Index survey, we show that restaurant prices rise in response to an increase in the minimum wage. These results hold up when using several different sources of variation in the data. We interpret these findings within a model of employment determination. The model implies that minimum wage hikes cause employment to fall and prices to rise if labor markets are competitive but potential...
Minimum Dominating Tree Problem for Graphs
LIN Hao; LIN Lan
2014-01-01
A dominating tree T of a graph G is a subtree of G which contains at least one neighbor of each vertex of G. The minimum dominating tree problem is to find a dominating tree of G with minimum number of vertices, which is an NP-hard problem. This paper studies some polynomially solvable cases, including interval graphs, Halin graphs, special outer-planar graphs and others.
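The defining conditions are straightforward to verify for a candidate tree, even though finding a minimum one is NP-hard. A small sketch using plain adjacency sets, taking the definition literally (every vertex of G must have a neighbor in T):

```python
def is_dominating_tree(adj, tree_vertices, tree_edges):
    """adj: dict vertex -> set of neighbors in G."""
    tv = set(tree_vertices)
    # T must use edges of G and, being connected with |E| = |V| - 1, be a tree.
    if any(v not in adj[u] for u, v in tree_edges):
        return False
    if len(tree_edges) != len(tv) - 1:
        return False
    nbr = {u: set() for u in tv}
    for u, v in tree_edges:
        nbr[u].add(v)
        nbr[v].add(u)
    seen, stack = set(), [next(iter(tv))]
    while stack:                       # DFS connectivity check
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(nbr[u] - seen)
    if seen != tv:
        return False
    # Domination: every vertex of G has at least one neighbor in T.
    return all(adj[v] & tv for v in adj)

# Path graph 1-2-3-4: T = ({2, 3}, {(2, 3)}) dominates every vertex.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_dominating_tree(adj, [2, 3], [(2, 3)]))   # True
```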
Lower bounds on the maximum energy benefit of network coding for wireless multiple unicast
Goseling, Jasper; Matsumoto, Ryutaroh; Uyematsu, Tomohiko; Weber, Jos H.
2010-01-01
We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio of the minimum energy required by routing and network coding.
Averaged Lemaître-Tolman-Bondi dynamics
Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried
2016-01-01
We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor, which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.
Average-passage flow model development
Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark
1989-01-01
A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model describes the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.
FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW
Haiying WANG; Xinyu ZHANG; Guohua ZOU
2009-01-01
In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without consideration of the uncertainty from the selection process. This often leads to the underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative to this procedure, which incorporates model uncertainty into the estimation process. In recent years, there has been rising interest in model averaging from the frequentist perspective, and important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.
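One common frequentist weighting scheme uses smoothed-AIC weights. A minimal sketch on synthetic data (the survey covers several other weight choices; this is only one of them):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)

preds, aics = [], []
for degree in (0, 1, 2, 3):                      # nested polynomial models
    X = np.vander(x, degree + 1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    k = degree + 1
    aic = n * np.log(np.mean(resid**2)) + 2 * k  # Gaussian AIC up to constants
    preds.append(X @ beta)
    aics.append(aic)

aics = np.array(aics)
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()                                     # smoothed-AIC model weights
y_avg = sum(wi * pi for wi, pi in zip(w, preds)) # model-averaged fit
print(np.round(w, 3))
```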
Averaging of Backscatter Intensities in Compounds
Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.
2002-01-01
Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed "electron fraction," which predicts backscatter yield better than mass fraction averaging. PMID:27446752
Experimental Demonstration of Squeezed State Quantum Averaging
Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
The Average Lower Connectivity of Graphs
Ersin Aslan
2014-01-01
Full Text Available For a vertex v of a graph G, the lower connectivity, denoted by s_v(G), is the smallest number of vertices that contains v and those vertices whose deletion from G produces a disconnected or a trivial graph. The average lower connectivity, denoted by κ_av(G), is the value (∑_{v∈V(G)} s_v(G))/|V(G)|. It is shown that this parameter can be used to measure the vulnerability of networks. This paper contains results on bounds for the average lower connectivity and obtains the average lower connectivity of some graphs.
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.
Changing mortality and average cohort life expectancy
Schoen, Robert; Canudas-Romo, Vladimir
2005-01-01
... of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...
Average subentropy, coherence and entanglement of random mixed quantum states
Zhang, Lin; Singh, Uttam; Pati, Arun K.
2017-02-01
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
Circumpolar thinning of Arctic sea ice following the 2007 record ice extent minimum
Giles, K.A.; Laxon, S. W.; Ridout, A. L.
2008-01-01
September 2007 marked a record minimum in sea ice extent. While there have been many studies published recently describing the minimum and its causes, little is known about how the ice thickness has changed in the run up to, and following, the summer of 2007. Using satellite radar altimetry data, covering the Arctic Ocean up to 81.5 degrees North, we show that the average winter sea ice thickness anomaly, after the melt season of 2007, was 0.26 m below the 2002/2003 to 2007/2008 average. More...
R Wave Extraction Based on the Maximum First Derivative plus the Maximum Value of the Double Search
Wen-po Yao; Wen-li Yao; Min Wu; Tie-bing Liu
2016-01-01
R-wave detection is the main approach for heart rate variability analysis and clinical applications based on the R-R interval. The maximum first derivative plus the maximum value of the double search algorithm is applied to electrocardiograms (ECG) from the MIT-BIH Arrhythmia Database to extract the R wave. Through the study of the algorithm's characteristics and the R-wave detection method, the data segmentation method is modified to improve the detection accuracy. After segmentation modification, the average accuracy rate of 6 sets of short ECG data increases from 82.51% to 93.70%, and the average accuracy rate of 11 groups of long-range data is 96.61%. Test results prove that the algorithm and segmentation method can accurately locate the R wave and have good effectiveness and versatility, but some beats may remain undetected due to the algorithm implementation.
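The double-search idea can be sketched in a few lines: locate the steepest upslope, then search a short window after it for the maximum value. All window sizes and the toy beat below are illustrative, not the paper's settings:

```python
import numpy as np

def detect_r_peak(segment, search_width=40):
    """Maximum first derivative, then maximum value: a double search."""
    d = np.diff(segment)
    i_slope = int(np.argmax(d))                 # steepest upslope
    stop = min(i_slope + search_width, len(segment))
    i_peak = i_slope + int(np.argmax(segment[i_slope:stop]))
    return i_peak                               # local maximum after the slope

# Toy beat: a narrow Gaussian "QRS" on a flat baseline.
t = np.linspace(-1, 1, 400)
beat = np.exp(-((t - 0.1) ** 2) / 0.002)
print(detect_r_peak(beat), int(np.argmax(beat)))   # both point at the R peak
```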
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
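The Levinson recursion mentioned above is a textbook algorithm for Toeplitz systems; a compact version (not the authors' implementation) looks like this:

```python
import numpy as np

def levinson_durbin(r, order):
    """Return prediction-error filter a (a[0] = 1) and final error power,
    given autocorrelation values r[0..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / err                  # reflection coefficient
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]
        a[m] = k
        err *= (1.0 - k * k)
    return a, err

# Example: autocorrelation of an AR(1) process with coefficient 0.6.
phi = 0.6
r = np.array([phi ** k for k in range(4)]) / (1 - phi ** 2)
a, err = levinson_durbin(r, order=1)
print(a)   # [1, -0.6]: the filter recovers the AR(1) coefficient
```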
Haq, I.; Stojakovic, M.; Li, M. [Ontario Power Generation Inc., Pickering, Ontario (Canada)
2011-07-01
Feeder pipes in CANDU nuclear stations are experiencing wall thinning due to flow accelerated corrosion (FAC), resulting in locally thinned regions in addition to general thinning. In Darlington NGS these locally thinned regions can be below the pressure based minimum thickness (t_min) required as per ASME Code Section III NB-3600 Equation (1). A methodology is presented to qualify the locally thinned regions under NB-3200 (NB-3213 and NB-3221) for internal pressure loading only. Detailed finite element models are used for internal pressure analysis using ANSYS v11.0. All other loadings such as deadweight, thermal and seismic loadings are qualified under NB-3600 using a general purpose piping stress analysis software. The piping stress analysis is based on an average thickness equal to t_min along with maximum values of the ASME Code stress indices (Table NB-3681(a)-1). The requirement for the use of this methodology is that the average thickness of each cross-section with the locally thinned region shall be at least t_min. The finite element analysis models are thinned to 0.75 t_min (in increments of 0.05 t_min) all around the circumference in the straight section region, allowing for flexible inspection requirements. Two different thicknesses of 1.10 t_min and 1.30 t_min are assigned to the bends. Thickness versus allowable axial extent curves were developed for the different types of feeder pipes in service. Feeders differ in pipe size, straight section length, bend angle and orientation. The stress analysis results show that all Darlington NGS outlet feeder pipes are fit for service with locally thinned regions up to 75% of the pressure based minimum thickness. This paper demonstrates the effectiveness of finite element analysis in extending the useful life of degraded piping components. (author)
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Appeals Council Requests - Average Processing Time
Social Security Administration — This dataset provides annual data from 1989 through 2015 for the average processing time (elapsed time in days) for dispositions by the Appeals Council (AC) (both...
Average Vegetation Growth 1990 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1997 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1992 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2001 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1995 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1995 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2000 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2000 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1998 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1994 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Average Vegetation Growth 1996 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1996 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2005 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2005 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1993 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Spacetime Average Density (SAD) Cosmological Measures
Page, Don N
2014-01-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
Rotational averaging of multiphoton absorption cross sections
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Rotational averaging of multiphoton absorption cross sections
Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth
2014-11-01
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Monthly snow/ice averages (ISCCP)
National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...
Average Annual Precipitation (PRISM model) 1961 - 1990
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
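The differentiation step can equally be done numerically. A minimal sketch maximizing P(V) = V·I(V) for a single-diode cell model, with illustrative parameter values (photocurrent, saturation current, ideality factor and cell count are assumptions, not data from the project):

```python
import numpy as np
from scipy.optimize import minimize_scalar

I_PH = 5.0                       # photocurrent [A]
I_0 = 1e-6                       # diode saturation current [A]
N_VT = 1.3 * 0.0257 * 36         # ideality * thermal voltage * cells in series

def current(v):
    """Single-diode model: panel current as a function of voltage."""
    return I_PH - I_0 * (np.exp(v / N_VT) - 1.0)

# Maximize power P(V) = V * I(V) by minimizing its negative.
res = minimize_scalar(lambda v: -v * current(v), bounds=(0.0, 20.0),
                      method="bounded")
v_mp = res.x
print(v_mp, current(v_mp), v_mp * current(v_mp))   # V, I and P at max power
```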
Symmetric Euler orientation representations for orientational averaging.
Mayerhöfer, Thomas G
2005-09-01
A new kind of orientation representation called symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes that are based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result on the specific Euler orientation representation that was utilized and on the initial position of the crystal. The latter problem can be overcome partly by the introduction of a weighting factor, but only for two-axes-type Euler orientation representations. In the case of a numerical evaluation of the average, a residual difference remains even if a two-axes-type Euler orientation representation is used, despite the utilization of a weighting factor. In contrast, this problem does not occur as a matter of principle if a symmetric Euler orientation representation is used, while the results of the averaging for both types of orientation representations converge with an increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighting factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.
Cosmic Inhomogeneities and the Average Cosmological Dynamics
Paranjape, Aseem; Singh, T. P.
2008-01-01
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a 'dark energy'. However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions...
Average Bandwidth Allocation Model of WFQ
Tomáš Balogh
2012-01-01
Full Text Available We present a new iterative method for the calculation of the average bandwidth assignment to traffic flows using a WFQ scheduler in IP based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model outcome with examples and simulation results using the NS2 simulator.
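The iterative idea, progressively fixing flows whose input rate is below their weighted share and redistributing the leftover capacity, can be sketched as follows (a generic water-filling-style sketch, not the paper's exact iteration):

```python
def wfq_average_bandwidth(link_speed, weights, input_rates):
    """Average bandwidth per flow under weighted fair sharing."""
    alloc = {}
    active = set(range(len(weights)))
    capacity = link_speed
    while active:
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        bottlenecked = {i for i in active if input_rates[i] <= share[i]}
        if not bottlenecked:
            alloc.update(share)          # every remaining flow is saturated
            break
        for i in bottlenecked:           # these flows only use their own rate
            alloc[i] = input_rates[i]
            capacity -= input_rates[i]
            active.remove(i)
    return alloc

# Three flows on a 10 Mbit/s link: the light flow 0 frees capacity for the rest.
print(wfq_average_bandwidth(10.0, [1, 2, 2], [1.0, 8.0, 8.0]))
# {0: 1.0, 1: 4.5, 2: 4.5}
```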
The inverse maximum flow problem with lower and upper bounds for the flow
Deaconu Adrian
2008-01-01
Full Text Available The general inverse maximum flow problem (denoted GIMF) is considered, where lower and upper bounds for the flow are changed so that a given feasible flow becomes a maximum flow and the distance (considering the l1 norm) between the initial vector of bounds and the modified vector is minimum. Strongly and weakly polynomial algorithms for solving this problem are proposed. In the paper it is also proved that the inverse maximum flow problem where only the upper bound for the flow is changed (IMF) is a particular case of the GIMF problem.
Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan
2015-10-01
Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical network (OFDM-PON), in which the CE results of the adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least squares, intrasymbol frequency-domain averaging, and minimum mean square error estimation, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method with low complexity can significantly enhance the transmission performance of OFDM-PON.
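The averaging step itself is a one-liner. A sketch on synthetic channel estimates (the frame count, noise level and uniform weights are illustrative, not the experiment's parameters):

```python
import numpy as np

def wifa(H_frames, weights=None):
    """Weighted average of per-subcarrier channel estimates across frames.
    H_frames: (n_frames, n_subcarriers) complex LS estimates."""
    H_frames = np.asarray(H_frames)
    if weights is None:
        weights = np.ones(len(H_frames))
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, H_frames, axes=1)

rng = np.random.default_rng(3)
H_true = np.exp(2j * np.pi * rng.random(64))            # unit-gain channel
noisy = H_true + 0.1 * (rng.standard_normal((8, 64))
                        + 1j * rng.standard_normal((8, 64)))
H_hat = wifa(noisy)
print(np.mean(np.abs(H_hat - H_true) ** 2))   # ~1/8 of the single-frame error
```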
Abdussamatov Habibullo
2015-01-01
Full Text Available The average annual decreasing rate of the total solar irradiance (TSI) has been increasing from the 22nd to the 23rd and 24th cycles, because since 1990 the Sun has been in the declining phase of its quasi-bicentennial variation. The portion of the solar energy absorbed by the Earth is decreasing. The decrease in the portion of TSI absorbed by the Earth since 1990 remains uncompensated by the Earth's radiation into space, which stays at its previous high level over a time interval determined by the thermal inertia of the ocean. A long-term negative deviation of the Earth's average annual energy balance from the equilibrium state dictates corresponding variations in its energy state. As a result, the Earth will have a negative average annual energy balance also in the future. This will lead to the beginning of a decrease in the Earth's temperature and of the epoch of the Little Ice Age after the maximum phase of the 24th solar cycle, approximately from the end of 2014. The influence of the consecutive chain of secondary feedback effects (the increase in the Bond albedo and the decrease in the concentration of greenhouse gases in the atmosphere due to cooling) will lead to an additional reduction of the absorbed solar energy and reduce the greenhouse effect. The start of the TSI's Grand Minimum is anticipated in solar cycle 27±1 in 2043±11, and the beginning of the phase of deep cooling of the 19th Little Ice Age of the past 7,500 years around 2060±11.
Load averaging system for co-generation plant; Jikayo hatsuden setsubi ni okeru fuka heijunka system
Ueno, Y. [Fuji Electric Co. Ltd., Tokyo (Japan)
1995-07-30
MAZDA Motor Corp. planned the construction of a 20.5 MW co-generation plant in 1991 to respond to an increase in power demand due to expansion of the Hofu factory. On introduction of this co-generation plant, it was decided that the basic system would adopt the following. (1) A circulating fluidized bed boiler which can be operated by burning multiple kinds of fuels with minimum environmental pollution. (2) A heat accumulation system which can be operated through reception of a constant power from the electric power company despite sudden and wide-range changes in power demand. (3) A circulating-water heat exchange recovery system which recovers exhaust heat of the turbine plant as hot water to be utilized for heating and air-conditioning of the factory, mainly in winter. Power demand in MAZDA's Hofu factory changes by 15% per minute within a range from 20 MW down to 8 MW. Such a change is difficult to follow even for an oil-fired boiler, which excels in load follow-up, and the circulating fluidized bed boiler employed this time has poorer follow-up performance than an oil boiler. For the new plant, however, a load averaging system named the heat accumulation system, capable of responding fully to the above change, was developed. This co-generation plant passed the official inspection before commercial operation according to the Ministerial Ordinance in 1993. Since then, with regard to rapid load following, which was one of the initial targets, operation has been performed steadily. This paper introduces an outline of the system and its operation conditions. 10 refs.
Nowcasting daily minimum air and grass temperature
Savage, M. J.
2016-02-01
Site-specific and accurate prediction of daily minimum air and grass temperatures, made available online several hours before their occurrence, would be of significant benefit to several economic sectors and for planning human activities. Site-specific and reasonably accurate nowcasts of the daily minimum temperature several hours before its occurrence, using measured sub-hourly temperatures from earlier in the morning as model inputs, were investigated. Various temperature models were tested for their ability to accurately nowcast daily minimum temperatures 2 or 4 h before sunrise. Temperature datasets used for the model nowcasts included sub-hourly grass and grass-surface (infrared) temperatures from one location in South Africa and air temperature from four subtropical sites varying in altitude (USA and South Africa) and from one site in central sub-Saharan Africa. The nowcast models employed either exponential or square root functions to describe the rate of nighttime temperature decrease, inverted so as to determine the minimum temperature. The models were also applied in near real-time using an open web-based system to display the nowcasts. Extrapolation algorithms for the site-specific nowcasts were also implemented in a datalogger in an innovative and mathematically consistent manner. Comparison of model 1 (exponential) nowcasts with measured daily minimum air temperatures yielded root mean square errors (RMSEs) for grass minimum temperature and the 4-h nowcasts.
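A minimal version of the exponential nowcast idea, fitting early-morning observations and extrapolating to the expected time of the minimum (the functional form and all numbers here are illustrative, not the paper's calibrated models):

```python
import numpy as np
from scipy.optimize import curve_fit

def cooling(t, t_min, amp, k):
    """Temperature decaying exponentially toward an asymptotic minimum."""
    return t_min + amp * np.exp(-k * t)

rng = np.random.default_rng(4)
t_obs = np.linspace(0.0, 4.0, 25)                    # hours after midnight
T_obs = cooling(t_obs, 2.0, 6.0, 0.5) + 0.1 * rng.standard_normal(25)

# Fit the cooling curve to the observed sub-hourly temperatures, then
# extrapolate a couple of hours ahead to the expected time of minimum.
p, _ = curve_fit(cooling, t_obs, T_obs, p0=(0.0, 5.0, 0.3))
t_sunrise = 6.0
print(cooling(t_sunrise, *p))                        # nowcast minimum, ~2.3 °C
```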
Averaged controllability of parameter dependent conservative semigroups
Lohéac, Jérôme; Zuazua, Enrique
2017-02-01
We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
Average Temperatures in the Southwestern United States, 2000-2015 Versus Long-Term Average
U.S. Environmental Protection Agency — This indicator shows how the average air temperature from 2000 to 2015 has differed from the long-term average (1895–2015). To provide more detailed information,...
2013-04-16
... Nutrition (HFS-850), Food and Drug Administration, 5100 Paint Branch Pkwy, College Park, MD 20740, 240-402..., Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 350a(i)) establishes requirements for the nutrient... infant formula, a food that is intended to be the sole source of nutrition for infants and...
Kandaswamy, Krishna Kumar Umar
2013-01-01
The extracellular matrix (ECM) is a major component of tissues of multicellular organisms. It consists of secreted macromolecules, mainly polysaccharides and glycoproteins. Malfunctions of ECM proteins lead to severe disorders such as marfan syndrome, osteogenesis imperfecta, numerous chondrodysplasias, and skin diseases. In this work, we report a random forest approach, EcmPred, for the prediction of ECM proteins from protein sequences. EcmPred was trained on a dataset containing 300 ECM and 300 non-ECM and tested on a dataset containing 145 ECM and 4187 non-ECM proteins. EcmPred achieved 83% accuracy on the training and 77% on the test dataset. EcmPred predicted 15 out of 20 experimentally verified ECM proteins. By scanning the entire human proteome, we predicted novel ECM proteins validated with gene ontology and InterPro. The dataset and standalone version of the EcmPred software is available at http://www.inb.uni-luebeck.de/tools-demos/Extracellular_matrix_proteins/EcmPred. © 2012 Elsevier Ltd.
Lee, Sang-Yong; Ortega, Antonio
2000-04-01
We address the problem of online rate control in digital cameras, where the goal is to achieve near-constant distortion for each image. Digital cameras usually have a pre-determined number of images that can be stored for the given memory size and require limited time delay and constant quality for each image. Due to time delay restrictions, each image should be stored before the next image is received. Therefore, we need to define an online rate control that is based on the amount of memory used by previously stored images, the current image, and the estimated rate of future images. In this paper, we propose an algorithm for online rate control, in which an adaptive reference, a 'buffer-like' constraint, and a minimax criterion (as a distortion metric to achieve near-constant quality) are used. The adaptive reference is used to estimate future images and the 'buffer-like' constraint is required to keep enough memory for future images. We show that using our algorithm to select online bit allocation for each image in a randomly given set of images provides near constant quality. Also, we show that our result is near optimal when a minimax criterion is used, i.e., it achieves a performance close to that obtained by applying an off-line rate control that assumes exact knowledge of the images. Suboptimal behavior is only observed in situations where the distribution of images is not truly random (e.g., if most of the 'complex' images are captured at the end of the sequence). Finally, we propose a T-step delay rate control algorithm and, using the result of the 1-step delay rate control algorithm, we show that this algorithm removes the suboptimal behavior.
Kandaswamy, Krishna Kumar; Pugalenthi, Ganesan; Kalies, Kai-Uwe; Hartmann, Enno; Martinetz, Thomas
2013-01-21
The extracellular matrix (ECM) is a major component of tissues of multicellular organisms. It consists of secreted macromolecules, mainly polysaccharides and glycoproteins. Malfunctions of ECM proteins lead to severe disorders such as Marfan syndrome, osteogenesis imperfecta, numerous chondrodysplasias, and skin diseases. In this work, we report a random forest approach, EcmPred, for the prediction of ECM proteins from protein sequences. EcmPred was trained on a dataset containing 300 ECM and 300 non-ECM proteins and tested on a dataset containing 145 ECM and 4187 non-ECM proteins. EcmPred achieved 83% accuracy on the training and 77% on the test dataset. EcmPred predicted 15 out of 20 experimentally verified ECM proteins. By scanning the entire human proteome, we predicted novel ECM proteins validated with gene ontology and InterPro. The dataset and a standalone version of the EcmPred software are available at http://www.inb.uni-luebeck.de/tools-demos/Extracellular_matrix_proteins/EcmPred.
2015-12-15
Imager (GUVI) on board the NASA/TIMED satellite. Definition of NCAR's Role: NCAR PI H.-L. Liu will help the PI (F. Sassi) interfacing the DAS... and delivering the WACCM-X model to the overall project PI, Dr. F. Sassi, and the NRL team; (2) enabling coupling of WACCM-X with the NAVDAS system... team members in the validation of the thermospheric products. Accomplishments A. WACCM-X Development: The NCAR Whole Atmosphere Community Climate
Impact of cigarette minimum price laws on the retail price of cigarettes in the USA.
Tynan, Michael A; Ribisl, Kurt M; Loomis, Brett R
2013-05-01
Cigarette price increases prevent youth initiation, reduce cigarette consumption and increase the number of smokers who quit. Cigarette minimum price laws (MPLs), which typically require cigarette wholesalers and retailers to charge a minimum percentage mark-up for cigarette sales, have been identified as an intervention that can potentially increase cigarette prices. 24 states and the District of Columbia have cigarette MPLs. Using data extracted from SCANTRACK retail scanner data from the Nielsen company, average cigarette prices were calculated for designated market areas in states with and without MPLs in three retail channels: grocery stores, drug stores and convenience stores. Regression models were estimated using the average cigarette pack price in each designated market area and calendar quarter in 2009 as the outcome variable. The average differences in cigarette pack prices are 46 cents in the grocery channel, 29 cents in the drug channel and 13 cents in the convenience channel, with prices being lower in states with MPLs in all three channels. The finding that MPLs do not raise cigarette prices could be the result of a lack of compliance and enforcement by the state, or could be attributed to the minimum state mark-up being lower than the free-market mark-up for cigarettes. Rather than require a minimum mark-up, which can be nullified by promotional incentives and discounts, states and countries could strengthen MPLs by setting a simple 'floor price' that is the true minimum price for all cigarettes, or could prohibit discounts to consumers and retailers.
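The comparison reported above amounts to regressing market-level prices on an MPL indicator. A minimal sketch of that computation follows; the prices and market labels are hypothetical stand-ins, not the Nielsen SCANTRACK data used in the paper.

```python
import numpy as np

# Hypothetical average pack prices (dollars) by designated market area,
# with an indicator for whether the state has a minimum price law (MPL).
price = np.array([5.10, 5.35, 4.90, 5.60, 5.05, 5.45])
has_mpl = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

X = np.column_stack([np.ones_like(has_mpl), has_mpl])  # intercept + MPL dummy
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"estimated MPL price difference: {beta[1] * 100:+.0f} cents per pack")
```

With a single dummy regressor, the slope coefficient is exactly the difference between mean prices in MPL and non-MPL markets; the paper's models additionally condition on channel and calendar quarter.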
Cosmic structure, averaging and dark energy
Wiltshire, David L
2013-01-01
These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of the atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysing bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
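The core computation, correlating one series against trailing moving averages of another and scanning for the window with peak fit, is easy to sketch. The series below are synthetic stand-ins built so that an 11-year window is the right answer; they are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
econ = rng.normal(8.0, 2.0, 100)    # stand-in annual economic misery index

# Stand-in literary index constructed to track the previous 11 years (plus noise):
lit = np.array([econ[t - 11:t].mean() for t in range(11, 100)])
lit += rng.normal(0.0, 0.3, lit.size)

def corr_at(window, t0=20):
    """Correlation between the literary index and a `window`-year trailing
    moving average of the economic index, on a common time range."""
    ma = np.array([econ[t - window:t].mean() for t in range(t0, 100)])
    return np.corrcoef(lit[t0 - 11:], ma)[0, 1]

best = max(range(2, 21), key=corr_at)
print(f"peak goodness of fit at a {best}-year window (r = {corr_at(best):.2f})")
```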
High Average Power Yb:YAG Laser
Zapata, L E; Beach, R J; Payne, S A
2001-05-23
We are working on a composite thin-disk laser design that can be scaled as a source of high-brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress over the last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, and thermal management with our first-generation cooler. Progress was also made in the design of a second-generation laser.
无
2003-01-01
This study investigates the relationships between geomorphometric properties and the minimum low-flow discharge of undisturbed drainage basins in the Taman Bukit Cahaya Seri Alam Forest Reserve, Peninsular Malaysia. The drainage basins selected were third-order basins, so as to provide a common base for sampling and an unbiased statistical analysis. Three levels of relationships were observed in the study. Significant relationships existed between the geomorphometric properties, as shown by the correlation network analysis; secondly, individual geomorphometric properties were observed to influence minimum flow discharge; and finally, the multiple regression model set up showed that minimum flow discharge (Qmin) was dependent on basin area (AU), stream length (LS), maximum relief (Hmax), average relief (HAV) and stream frequency (SF). These findings reinforce other studies of this nature showing that drainage basins are dynamic and functional entities whose operations are governed by complex interrelationships occurring within the basins. Changes to any of the geomorphometric properties would influence their role as basin regulators, thus inducing a change in basin response. In the case of a basin's minimum low flow, a change in any of the properties considered in the regression model influenced the 'time to peak' of flow. A shorter time period would mean higher discharge, which is generally considered a prerequisite to flooding. This research also concludes that the geomorphometric properties control the water supply within the stream throughout the year, even during droughts and months of low precipitation. Drainage basins are sensitive entities, and any deterioration will generate reciprocal responses in the water supply as well as in the habitat within the areas.
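Read as a standard multiple linear regression, the model described above would take the following form. The linear specification and the error term ε are our assumption for illustration, since the abstract does not report the fitted equation or its coefficients:

```latex
Q_{\min} = \beta_0 + \beta_1 A_U + \beta_2 L_S + \beta_3 H_{\max}
         + \beta_4 H_{AV} + \beta_5 S_F + \varepsilon
```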
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCLs) can reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures, which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these, the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small, and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3, a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real
A singularity theorem based on spatial averages
J M M Senovilla
2007-07-01
Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompletness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear decisive difference between singular and non-singular cosmologies.
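For context, the Raychaudhuri equation referred to here governs the expansion θ of a timelike geodesic congruence; this is the standard textbook form, not notation taken from the paper:

```latex
\frac{d\theta}{d\tau} = -\tfrac{1}{3}\theta^{2}
  - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab} - R_{ab}u^{a}u^{b}
```

In the non-rotating case (ω_ab = 0) and with the strong energy condition (R_ab u^a u^b ≥ 0), every term on the right-hand side is non-positive, which is what drives the focusing, and hence the geodesic-incompleteness, arguments of theorems like the one above.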
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS
K. L. Goluoglu
2000-06-09
The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.
An approximate analytical approach to resampling averages
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...
Grassmann Averages for Scalable Robust PCA
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t
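As background for this critique, the following sketch shows how AIC-based model-averaged coefficients are typically computed; the AIC values and estimates are hypothetical. The paper's point is that when predictors are collinear, averaging the raw estimates like this is not meaningful unless they are first standardized to a common scale.

```python
import numpy as np

def akaike_weights(aic):
    """w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC)."""
    delta = np.asarray(aic) - np.min(aic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aic = [100.2, 101.5, 104.0]      # hypothetical candidate models
beta_hat = [0.80, 0.65, 0.90]    # the same predictor's estimate in each model
w = akaike_weights(aic)
print("weights:", np.round(w, 3))
print("model-averaged coefficient:", round(float(np.dot(w, beta_hat)), 3))
```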
Minimum description length synthetic aperture radar image segmentation.
Galland, Frédéric; Bertaux, Nicolas; Réfrégier, Philippe
2003-01-01
We present a new minimum description length (MDL) approach based on a deformable partition--a polygonal grid--for automatic segmentation of a speckled image composed of several homogeneous regions. The image segmentation thus consists in the estimation of the polygonal grid, or, more precisely, its number of regions, its number of nodes and the location of its nodes. These estimations are performed by minimizing a unique MDL criterion which takes into account the probabilistic properties of speckle fluctuations and a measure of the stochastic complexity of the polygonal grid. This approach then leads to a global MDL criterion without an undetermined parameter since no other regularization term than the stochastic complexity of the polygonal grid is necessary and noise parameters can be estimated with maximum likelihood-like approaches. The performance of this technique is illustrated on synthetic and real synthetic aperture radar images of agricultural regions and the influence of different terms of the model is analyzed.
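The criterion has the usual two-part MDL shape: bits to describe the data given the model, plus bits to describe the model itself. The decomposition below is a generic sketch of that trade-off, not the paper's exact expression:

```latex
\mathrm{MDL}(\text{grid}) =
  \underbrace{-\log P(\text{image} \mid \text{grid})}_{\text{fit to the speckle statistics}}
  \;+\;
  \underbrace{\mathcal{L}(\text{grid})}_{\text{stochastic complexity: number of regions, nodes, node positions}}
```

Minimizing the first term alone would over-segment the image; the second term penalizes grids with many regions and nodes, so no separately tuned regularization weight is needed.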
Hu, Jing; Hu, Jie; Wang, Yuanmei
2003-03-01
In magnetoencephalography (MEG) inverse research, according to the point-source model and the distributed-source model, neuromagnetic source reconstruction methods are classified as parametric current dipole localization and nonparametric source imaging (or current density reconstruction). The MEG source imaging technique can be formulated as an inherently ill-posed and highly underdetermined linear inverse problem. In order to yield a robust and plausible neural current distribution image, various approaches have been proposed. Among those, weighted minimum-norm estimation with Tikhonov regularization is a popular technique. The authors present a relatively comprehensive theoretical framework. Following a discussion of the development, several regularized minimum-norm algorithms are described in detail, including depth normalization, low-resolution electromagnetic tomography (LORETA), the focal underdetermined system solver (FOCUSS) and selective minimum-norm (SMN). In addition, some other imaging methods, e.g., the maximum entropy method (MEM), methods incorporating other brain functional information such as fMRI data, and the maximum a posteriori (MAP) method using a Markov random field model, are explained as well. From the generalized point of view based on minimum-norm estimation with Tikhonov regularization, all these algorithms aim to resolve the trade-off between fidelity to the measured data and constraint assumptions about the neural source configuration, such as anatomical and physiological information. In conclusion, almost all source imaging approaches can be made consistent with regularized minimum-norm estimation to some extent.
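The Tikhonov-regularized weighted minimum-norm estimate that this family builds on has a closed form. Below is a minimal sketch with a random toy lead field (32 sensors, 200 sources, all dimensions hypothetical), not an MEG pipeline:

```python
import numpy as np

def weighted_minimum_norm(L, b, W, lam):
    """Tikhonov-regularized weighted minimum-norm estimate:
    x_hat = argmin_x ||L x - b||^2 + lam * ||W x||^2
          = (L^T L + lam * W^T W)^{-1} L^T b."""
    A = L.T @ L + lam * (W.T @ W)
    return np.linalg.solve(A, L.T @ b)

rng = np.random.default_rng(1)
L = rng.normal(size=(32, 200))   # toy lead field: 32 sensors, 200 sources
x_true = np.zeros(200)
x_true[50] = 1.0                 # a single active source
b = L @ x_true + 0.01 * rng.normal(size=32)

W = np.eye(200)                  # identity weights -> classical minimum norm
x_hat = weighted_minimum_norm(L, b, W, lam=1e-2)
print("strongest estimated source:", int(np.argmax(np.abs(x_hat))))
```

Depth normalization amounts to replacing the identity W with a diagonal matrix of lead-field column norms, which counteracts the minimum-norm bias toward superficial sources.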
Deep solar minimum and global climate changes
Ahmed A. Hady
2013-05-01
This paper examines the deep minimum of solar cycle 23 and its potential impact on climate change. In addition, the source region of the solar wind at solar activity minimum has been studied, especially for solar cycle 23, whose minimum was the deepest of the last 500 years. Solar activity has had a notable effect on palaeoclimatic changes. Contemporary solar activity is so weak that it would be expected to cause global cooling; the prevalent global warming, caused by the build-up of greenhouse gases in the troposphere, seems to exceed this solar effect. This paper discusses this issue.
A minimum achievable PV electrical generating cost
Sabisky, E.S. [11 Carnation Place, Lawrenceville, NJ 08648 (United States)
1996-03-22
The role and share of photovoltaic (PV) generated electricity in our nation's future energy arsenal is primarily dependent on its future production cost. This paper provides a framework for obtaining a minimum achievable electrical generating cost (a lower bound) for fixed, flat-plate photovoltaic systems. A cost of 2.8 ¢/kWh (1990 $) was derived for a plant located in Southwestern USA sunshine using a cost of money of 8%. In addition, a value of 22 ¢/Wp (1990 $) was estimated as a minimum module manufacturing cost/price.
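A bound of this kind rests on standard levelized-cost arithmetic: annualize the capital at the cost of money and divide by annual energy. The sketch below uses illustrative inputs of our own (capital cost, lifetime, O&M, capacity factor), not the paper's assumptions, apart from the 8% cost of money:

```python
def capital_recovery_factor(rate, years):
    """Annualizes an up-front capital cost at a given cost of money."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

# Illustrative inputs (hypothetical, except the 8% cost of money):
capital_per_kw = 1000.0   # $/kW installed
om_per_kw_yr = 10.0       # $/kW-yr operations and maintenance
capacity_factor = 0.25    # fraction of the year at rated output
crf = capital_recovery_factor(0.08, 30)

annual_kwh = 8760 * capacity_factor
lcoe = (capital_per_kw * crf + om_per_kw_yr) / annual_kwh
print(f"levelized cost ~ {100 * lcoe:.1f} cents/kWh")
```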
Weight-Constrained Minimum Spanning Tree Problem
Henn, Sebastian Tobias
2007-01-01
In an undirected graph G we associate costs and weights with each edge. The weight-constrained minimum spanning tree problem is to find a spanning tree of total edge weight at most a given value W and of minimum total cost under this restriction. In this thesis, a literature overview of this NP-hard problem and theoretical properties concerning the convex hull and the Lagrangian relaxation are given. We also present some inclusion and exclusion tests for this problem. We apply a ranking algorithm and the me...
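A minimal sketch of the Lagrangian-relaxation idea mentioned above: relax the weight constraint into the objective with a multiplier λ, so each subproblem is an ordinary MST under the combined key cost + λ·weight, and bisect on λ. The toy graph and the bisection scheme are our own illustration; this yields a feasible tree and a bound, not necessarily the optimum.

```python
def kruskal(edges, n, key):
    """edges: (u, v, cost, weight) tuples; returns the MST under `key`."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    tree = []
    for e in sorted(edges, key=key):
        ra, rb = find(e[0]), find(e[1])
        if ra != rb:
            parent[ra] = rb
            tree.append(e)
    return tree

def weight_constrained_mst(edges, n, W, iters=50):
    """Bisect on the multiplier lam so the tree's total weight approaches W.
    Returns the best feasible tree found (None if even max-lam is infeasible)."""
    lo, hi, best = 0.0, 1e6, None
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        tree = kruskal(edges, n, key=lambda e: e[2] + lam * e[3])
        if sum(e[3] for e in tree) <= W:
            best, hi = tree, lam   # feasible: try a smaller weight penalty
        else:
            lo = lam               # infeasible: penalize weight more
    return best

edges = [(0, 1, 1.0, 4.0), (1, 2, 1.0, 4.0), (0, 2, 3.0, 1.0),
         (2, 3, 2.0, 2.0), (1, 3, 5.0, 1.0)]
print(weight_constrained_mst(edges, n=4, W=7.0))
```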
On the average uncertainty for systems with nonlinear coupling
Nelson, Kenric P.; Umarov, Sabir R.; Kon, Mark A.
2017-02-01
The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy when transformed to the probability domain is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale (i.e. at the width of the distribution). The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Rényi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling.
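The transform of Shannon entropy to the probability domain mentioned above is a one-line identity: the weighted geometric mean of the probabilities (each probability weighted by itself) equals exp(-H). A quick numerical check, with an arbitrary distribution of our choosing:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])

H = -np.sum(p * np.log(p))    # Shannon entropy in nats
geo = np.prod(p ** p)         # weighted geometric mean of the probabilities

# The transform to the probability domain is exact: prod_i p_i^{p_i} = exp(-H).
assert np.isclose(geo, np.exp(-H))
print(f"H = {H:.3f} nats; average uncertainty exp(-H) = {geo:.3f}")
```

Replacing the geometric mean by a weighted generalized (power) mean produces the Rényi/Tsallis analogues discussed in the abstract.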
XMM-Newton Observations of the 2003 X-Ray Minimum of Eta Carinae
Hamaguchi, K.; Corcoran, M. F.; White, N. E.; Damineli, A.; Davidson, K.; Gull, T. R.
2004-01-01
The XMM-Newton X-ray observatory took part in the multi-wavelength observing campaign of the massive, evolved star Eta Carinae in 2003 during its recent X-ray minimum in June 2003. This paper reports the first results of these observations, which were performed (1) before the minimum (five times in January 2003), (2) near the X-ray maximum just before the minimum (two times in June) and (3) during the minimum (four times in July-August). Hard X-ray emission from the point source of Eta Carinae was detected even during the minimum. The observed flux above 3 keV was approximately 3×10^-12 erg cm^-2 s^-1, which is about one percent of the flux before the minimum. Light curves from the individual observations show no time variability on the scale of a few kiloseconds. Changes in the spectral shape occurred, but these changes were smaller than expected if the minimum is produced solely by an increase of hydrogen column density. Fits of the hard X-ray source by an absorbed 1T model show a constant plasma temperature at around 5 keV and an increase of column density from 5×10^22 cm^-2 to 2×10^23 cm^-2. The spectra below 6 keV deviate significantly from the models that fit the higher-energy emission. The X-ray minimum seems to be dominated by an apparent decrease of the emission measure, suggesting that the brightest part of the X-ray emitting region is completely obscured during the minimum in the form of an eclipse. Partial-covering plasma emission models might be considered for the spectral variation. The spectra also showed strong iron K line emission from both hot and cold gases, and weak line emission from Ni, Ca, Ar, S and Si.
Constructing minimum-cost flow-dependent networks
Thomas, Doreen A.; Weng, Jia F.
2002-09-01
In the construction of a communication network, the length of the network is an important but not unique factor determining the cost of the network. Among many possible network models, Gilbert proposed a flow-dependent model in which flow demands are assigned between each pair of points in a given point set A, and the cost per unit length of a link in the network is a function of the flow through the link. In this paper we first investigate the properties of this Gilbert model: the concavity of the cost function, decomposition, local minimality, the number of Steiner points and the maximum degree of Steiner points. Then we propose three heuristics for constructing minimum cost Gilbert networks. Two of them come from the fact that generally a minimum cost Gilbert network stands between two extremes: the complete network G(A) on A and the edge-weighted Steiner minimal tree W(A) on A. The first heuristic starts with G(A) and reduces the cost by splitting angles; the second one starts with both G(A) and W(A), and reduces the cost by selecting low cost paths. As a generalisation of the second heuristic, the third heuristic constructs a new Gilbert network of less cost by hybridising known Gilbert networks. Finally we discuss some considerations in practical applications.
Phylogenetic Applications of the Minimum Contradiction Approach on Continuous Characters
Marc Thuillard
2009-01-01
We describe the conditions under which a set of continuous variables or characters can be described as an X-tree or a split network. A distance matrix corresponds exactly to a split network or a valued X-tree if, after ordering of the taxa, the variables' values can be embedded into a function with at most a local maximum and a local minimum, and crossing any horizontal line at most twice. In real applications, the order of the taxa best satisfying the above conditions can be obtained using the Minimum Contradiction method. This approach is applied to two sets of continuous characters. The first set corresponds to craniofacial landmarks in Hominids. The contradiction matrix is used to identify possible tree structures and some alternatives when they exist. We explain how to discover the main structuring characters in a tree. The second set consists of a sample of 100 galaxies. In this second example we show how to discretize the continuous variables describing physical properties of the galaxies without disrupting the underlying tree structure.
On averaging methods for partial differential equations
Verhulst, F.
2001-01-01
The analysis of weakly nonlinear partial differential equations, both qualitatively and quantitatively, is emerging as an exciting field of investigation. In this report we consider specific results related to averaging, but we do not aim at completeness. The sections ... contain important material which
Discontinuities and hysteresis in quantized average consensus
Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo
2011-01-01
We consider continuous-time average consensus dynamics in which the agents' states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of 'practical consensus'. To cope with undesired chattering
Bayesian Averaging is Well-Temperated
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic, just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization-optimal given that the prior matches the teacher parameter distribution, the situation
A Functional Measurement Study on Averaging Numerosity
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Generalized Jackknife Estimators of Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...
Bootstrapping Density-Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...
Quantum Averaging of Squeezed States of Light
Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
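For readers unfamiliar with these rules, the canonical example is the MA crossover: hold a long position when a short moving average of prices sits above a long one, and a short position otherwise. The sketch below is a generic toy version on simulated prices, not a rule from the paper:

```python
import numpy as np

def ma_crossover_positions(prices, short=5, long=20):
    """+1 (long) when the short MA exceeds the long MA, else -1 (short)."""
    positions = np.zeros(len(prices))
    for t in range(long, len(prices)):
        s = prices[t - short:t].mean()
        l = prices[t - long:t].mean()
        positions[t] = 1.0 if s > l else -1.0
    return positions

rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 250)))  # toy price path
pos = ma_crossover_positions(prices)
print("fraction of days long:", (pos[20:] > 0).mean())
```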
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequen
High average-power induction linacs
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.
1989-03-15
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns-duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.
High Average Power Optical FEL Amplifiers
Ben-Zvi, I; Litvinenko, V
2005-01-01
Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW-class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Li...
Full averaging of fuzzy impulsive differential inclusions
Natalia V. Skripnik
2010-09-01
In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend the similar results for impulsive differential inclusions with the Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).
Materials for high average power lasers
Marion, J.E.; Pertica, A.J.
1989-01-01
Unique materials properties requirements for solid state high average power (HAP) lasers dictate a materials development research program. A review of the desirable laser, optical and thermo-mechanical properties for HAP lasers precedes an assessment of the development status for crystalline and glass hosts optimized for HAP lasers. 24 refs., 7 figs., 1 tab.