Minimum redundancy maximum relevance feature selection approach for temporal gene expression data.
Radovic, Milos; Ghalwash, Mohamed; Filipovic, Nenad; Obradovic, Zoran
2017-01-03
Feature selection, aiming to identify a subset of features among a possibly large set of features that are relevant for predicting a response, is an important preprocessing step in machine learning. In gene expression studies this is not a trivial task for several reasons, including the potentially temporal character of the data. However, most feature selection approaches developed for microarray data cannot handle multivariate temporal data without prior data flattening, which results in a loss of temporal information. We propose a temporal minimum redundancy - maximum relevance (TMRMR) feature selection approach, which is able to handle multivariate temporal data without prior data flattening. In the proposed approach we compute the relevance of a gene by averaging F-statistic values calculated across individual time steps, and we compute the redundancy between genes using a dynamic time warping approach. The proposed method is evaluated on three temporal gene expression datasets from human viral challenge studies. The obtained results show that the proposed method outperforms alternatives widely used in gene expression studies. In particular, the proposed method achieved an improvement in accuracy in 34 out of 54 experiments, while the other methods outperformed it in no more than 4 experiments. We developed a filter-based feature selection method for temporal gene expression data based on the maximum relevance and minimum redundancy criteria. The proposed method incorporates temporal information by combining relevance, calculated as an average F-statistic value across different time steps, with redundancy, calculated using a dynamic time warping approach. As evident in our experiments, incorporating temporal information into the feature selection process leads to the selection of more discriminative features.
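The TMRMR scheme summarized in this abstract can be sketched in a few dozen lines. The following is an illustrative reimplementation, not the authors' code: the normalization of relevance scores, the conversion of DTW distance into a similarity, and all function names are our own assumptions.

```python
# Sketch of a TMRMR-style selector: relevance = F-statistic averaged over
# time steps; redundancy = similarity derived from dynamic time warping
# (DTW) between class-averaged gene trajectories. Illustrative only.
import numpy as np

def f_statistic(values, y):
    """One-way ANOVA F-statistic for a single feature across samples."""
    groups = [values[y == c] for c in np.unique(y)]
    grand = values.mean()
    k, n = len(groups), len(values)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def tmrmr_select(X, y, n_select):
    """Greedy selection on X with shape (genes, samples, time_steps)."""
    n_genes, _, n_steps = X.shape
    relevance = np.array([
        np.mean([f_statistic(X[g, :, t], y) for t in range(n_steps)])
        for g in range(n_genes)
    ])
    relevance = relevance / relevance.max()  # our choice, to balance the two terms
    mean_traj = X.mean(axis=1)               # one average trajectory per gene
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best_g, best_score = None, -np.inf
        for g in range(n_genes):
            if g in selected:
                continue
            # small DTW distance => similar trajectory => redundant
            redundancy = np.mean([1.0 / (1.0 + dtw_distance(mean_traj[g], mean_traj[s]))
                                  for s in selected])
            score = relevance[g] - redundancy
            if score > best_score:
                best_g, best_score = g, score
        selected.append(best_g)
    return selected
```

On toy data with one strongly discriminative gene, an almost identical copy of it, and a weaker but non-redundant gene, the copy is penalized for redundancy and the non-redundant gene tends to be picked second.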
3D facial expression recognition using maximum relevance minimum redundancy geometrical features
Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce
2012-12-01
In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Yan, Xiaozhen; Xie, Wu; Xu, Zhen
2016-12-01
Band selection is considered to be an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (a neighborhood dependency measure based algorithm, a genetic algorithm and the uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to a promising improvement in band selection and classification accuracy.
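The difference and quotient forms of the MRMR criterion can be illustrated with a small forward greedy search over discretized bands. This is a generic sketch using ordinary discrete mutual information rather than the neighborhood mutual information used in the paper; all function and parameter names are our own.

```python
# Generic MRMR forward greedy search over discrete-valued bands.
# score(difference) = I(band; labels) - mean I(band; selected)
# score(quotient)   = I(band; labels) / mean I(band; selected)
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete mutual information I(X; Y) in nats."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr(bands, labels, k, form="difference"):
    """Select k band indices by the MRMR difference or quotient criterion."""
    selected = [max(range(len(bands)),
                    key=lambda b: mutual_information(bands[b], labels))]
    while len(selected) < k:
        def score(b):
            rel = mutual_information(bands[b], labels)
            red = sum(mutual_information(bands[b], bands[s])
                      for s in selected) / len(selected)
            if form == "quotient":
                return rel / red if red > 0 else float("inf")
            return rel - red
        selected.append(max((b for b in range(len(bands)) if b not in selected),
                            key=score))
    return selected
```

With an exactly duplicated informative band, the difference criterion skips the duplicate in favor of a weaker but less redundant band.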
Aggarwal, Namita; Rana, Bharti; Agrawal, R K; Kumaran, Senthil
2015-01-01
In this paper, we propose a three-phase method for the diagnosis of Alzheimer's disease using structural magnetic resonance imaging (MRI). In the first phase, a gray matter tissue probability map is obtained from every brain MRI volume. Further, five regions of interest (ROIs) are extracted as per prior knowledge. In the second phase, features are extracted from each ROI using the 3D dual-tree discrete wavelet transform. In the third phase, relevant features are selected using the minimum redundancy maximum relevance feature selection technique. A decision model is then built from the selected features using a classifier. To evaluate the effectiveness of the proposed method, experiments are performed with four well-known classifiers on four datasets built from the publicly available OASIS database. The performance is evaluated in terms of sensitivity, specificity and classification accuracy. It was observed that the proposed method outperforms existing methods in terms of all three performance measures. This is further validated with statistical tests.
Hejazi, Mohamad I.; Cai, Ximing
2009-04-01
Input variable selection (IVS) is a necessary step in modeling water resources systems. Neglecting this step may lead to unnecessary model complexity and reduced model accuracy. In this paper, we apply the minimum redundancy maximum relevance (MRMR) algorithm to identify the most relevant set of inputs in modeling a water resources system. We further introduce two modified versions of the MRMR algorithm (α-MRMR and β-MRMR), where α and β are correction factors that are found to increase and decrease as a power-law function, respectively, with the progress of the input selection algorithm and the increase in the number of selected input variables. We apply the proposed algorithms to 22 reservoirs in California to predict daily releases based on a set of 121 potential input variables. Results indicate that the two proposed algorithms are good measures of model inputs, as reflected in enhanced model performance. The α-MRMR and β-MRMR values exhibit a strong negative correlation with model performance, as depicted in lower root-mean-square error (RMSE) values.
Liu, Lili; Chen, Lei; Zhang, Yu-Hang; Wei, Lai; Cheng, Shiwen; Kong, Xiangyin; Zheng, Mingyue; Huang, Tao; Cai, Yu-Dong
2017-02-01
Drug-drug interaction (DDI) defines a situation in which one drug affects the activity of another when both are administered together. DDI is a common cause of adverse drug reactions and sometimes also leads to improved therapeutic effects. Therefore, it is of great interest to discover novel DDIs according to their molecular properties and mechanisms in a robust and rigorous way. This paper attempts to predict effective DDIs using the following properties: (1) chemical interaction between drugs; (2) protein interactions between the targets of drugs; and (3) target enrichment of KEGG pathways. The data consisted of 7323 pairs of DDIs collected from the DrugBank and 36,615 pairs of drugs constructed by randomly combining two drugs. Each drug pair was represented by 465 features derived from the aforementioned three categories of properties. The random forest algorithm was adopted to train the prediction model. Some feature selection techniques, including minimum redundancy maximum relevance and incremental feature selection, were used to extract key features as the optimal input for the prediction model. The extracted key features may help to gain insights into the mechanisms of DDIs and provide some guidelines for the relevant clinical medication developments, and the prediction model can give new clues for identification of novel DDIs.
Minimum Length - Maximum Velocity
Panes, Boris
2011-01-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out to be natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA about superluminal neutrinos.
ShaoPeng Wang
2016-01-01
The development of biochemistry and molecular biology has revealed an increasingly important role of compounds in several biological processes. Like the aptamer-protein interaction, the aptamer-compound interaction is attracting increasing attention. However, it is time-consuming to select proper aptamers against compounds using traditional methods, such as systematic evolution of ligands by exponential enrichment (SELEX). Thus, there is an urgent need to design effective computational methods for searching for effective aptamers against compounds. This study attempted to extract important features for aptamer-compound interactions using feature selection methods, such as Maximum Relevance Minimum Redundancy, as well as incremental feature selection. Each aptamer-compound pair was represented by properties derived from the aptamer and the compound, including frequencies of single nucleotides and dinucleotides for the aptamer, as well as the constitutional, electrostatic, quantum-chemical, and space conformational descriptors of the compound. As a result, some important features were obtained. To confirm the importance of the obtained features, we further discuss the associations between them and aptamer-compound interactions. Simultaneously, an optimal prediction model based on the nearest neighbor algorithm was built to identify aptamer-compound interactions, which has the potential to be a useful tool for the identification of novel aptamer-compound interactions. The program is available upon request.
Xin Ma
2015-01-01
The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features have important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a 0.737 Matthews correlation coefficient). The high prediction accuracy and successful prediction performance suggest that our method can be a useful approach to identify RNA-binding proteins from sequence information.
Liu, Tong; Hu, Liang; Ma, Chao; Wang, Zhi-Yan; Chen, Hui-Ling
2015-04-01
In this paper, a novel hybrid method, which integrates an effective filter, maximum relevance minimum redundancy (MRMR), with a fast classifier, the extreme learning machine (ELM), is introduced for diagnosing erythemato-squamous (ES) diseases. In the proposed method, MRMR is employed as a feature selection tool for dimensionality reduction in order to further improve the diagnostic accuracy of the ELM classifier. The impact of the type of activation function, the number of hidden neurons and the size of the feature subsets on the performance of ELM has been investigated in detail. The effectiveness of the proposed method has been rigorously evaluated on the ES disease dataset, a benchmark dataset from the UCI machine learning repository, in terms of classification accuracy. Experimental results demonstrate that our method achieved the best classification accuracy of 98.89% and an average accuracy of 98.55% via the 10-fold cross-validation technique. The proposed method might serve as a promising candidate among powerful methods for diagnosing ES diseases.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross entropy and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for the Tsallis entropy as a typical example.
An application of Hamiltonian neurodynamics using Pontryagin's Maximum (Minimum) Principle.
Koshizen, T; Fulcher, J
1995-12-01
Classical optimal control methods, notably Pontryagin's Maximum (Minimum) Principle (PMP), can be employed, together with Hamiltonians, to determine optimal system weights in artificial neural dynamical systems. A new learning rule based on weight equations derived using PMP is shown to be suitable for both discrete- and continuous-time systems and, moreover, can also be applied to feedback networks. Preliminary testing shows that this PMP learning rule compares favorably with standard backpropagation (SBP) on the XOR problem.
Improved Minimum Cuts and Maximum Flows in Undirected Planar Graphs
Italiano, Giuseppe F
2010-01-01
In this paper we study minimum cut and maximum flow problems on planar graphs, both in static and in dynamic settings. First, we present an algorithm that, given an undirected planar graph, computes the minimum cut between any two given vertices in O(n log log n) time. Second, we show how to achieve the same O(n log log n) bound for the problem of computing maximum flows in undirected planar graphs. To the best of our knowledge, these are the first algorithms for those two problems that break the O(n log n) barrier, which had been standing for more than 25 years. Third, we present a fully dynamic algorithm that is able to maintain information about minimum cuts and maximum flows in a plane graph (i.e., a planar graph with a fixed embedding): our algorithm is able to insert edges, delete edges, and answer min-cut and max-flow queries between any pair of vertices in O(n^(2/3) log^3 n) time per operation. This result is based on a new dynamic shortest path algorithm for planar graphs which may be of independent interest.
Maximum, minimum, and optimal mutation rates in dynamic environments
Ancliff, Mark; Park, Jeong-Man
2009-12-01
We analyze the dynamics of the parallel mutation-selection quasispecies model with a changing environment. For an environment with the sharp-peak fitness function in which the most fit sequence changes by k spin flips every period T, we find analytical expressions for the minimum and maximum mutation rates for which a quasispecies can survive, valid in the limit of large sequence size. We find an asymptotic solution in which the quasispecies population changes periodically according to the periodic environmental change. In this state we compute the mutation rate that gives the optimal mean fitness over a period. We find that the optimal mutation rate per genome, k/T, is independent of genome size, a relationship which is observed across broad groups of real organisms.
How do GCMs represent daily maximum and minimum temperatures in La Plata Basin?
Bettolli, M. L.; Penalba, O. C.; Krieger, P. A.
2013-05-01
This work focuses on the southern La Plata Basin region, which is one of the most important agriculture- and hydropower-producing regions worldwide. Extreme climate events such as cold and heat waves and frost events have a significant socio-economic impact. It is a big challenge for global climate models (GCMs) to simulate regional patterns, temporal variations and the distribution of temperature on a daily basis. Taking into account the present and future relevance of the region for the economies of the countries involved, it is very important to analyze maximum and minimum temperatures for model evaluation and development. This kind of study is also the basis for a great deal of the statistical downscaling methods in a climate change context. The aim of this study is to analyze the ability of GCMs to reproduce the observed daily maximum and minimum temperatures in the southern La Plata Basin region. To this end, daily fields of maximum and minimum temperatures from a set of 15 GCMs were used. The outputs corresponding to the historical experiment for the reference period 1979-1999 were obtained from the WCRP CMIP5 (World Climate Research Programme Coupled Model Intercomparison Project Phase 5). In order to compare daily temperature values in the southern La Plata Basin region as generated by GCMs to those derived from observations, daily maximum and minimum temperatures were taken from the gridded dataset generated by the Claris LPB Project ("A Europe-South America Network for Climate Change Assessment and Impact Studies in La Plata Basin"). Additionally, reference station data were included in the study. The analysis focused on austral winter (June, July, August) and summer (December, January, February). The study was carried out by analyzing the performance of the 15 GCMs, as well as their ensemble mean, in simulating the probability distribution function (pdf) of maximum and minimum temperatures, including mean values, variability, skewness, etc., and regional
CO2 maximum in the oxygen minimum zone (OMZ)
Paulmier, A.; Ruiz-Pino, D.; Garçon, V.
2011-02-01
Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to contribute significantly to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the budget of oceanic sources and sinks of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure associated locally with the Chilean OMZ and globally with the main, most intense OMZs (O2 < … μmol kg−1) in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000-2002) and a monthly monitoring (2000-2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg−1, up to 2350 μmol kg−1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning that the DIC increase from the oxygenated ocean to the OMZ is lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates. The "carbon deficit" in the CMZ core would be mainly compensated locally at the oxycline by a "carbon excess" induced by a specific remineralization. Indeed, a possible co-existence of bacterial heterotrophic and autotrophic processes usually occurring at different depths could
THE MAXIMUM AND MINIMUM DEGREES OF RANDOM BIPARTITE MULTIGRAPHS
Chen Ailian; Zhang Fuji; Li Hao
2011-01-01
In this paper the authors generalize the classic random bipartite graph model and define a model of random bipartite multigraphs as follows: let m = m(n) be a positive integer-valued function of n, and let (n, m; {p_k}) be the probability space consisting of all labeled bipartite multigraphs with two vertex sets A = {a_1, a_2, ..., a_n} and B = {b_1, b_2, ..., b_m}, in which the numbers t(a_i, b_j) of edges between any two vertices a_i ∈ A and b_j ∈ B are identically distributed independent random variables with distribution P{t(a_i, b_j) = k} = p_k, k = 0, 1, 2, ..., where p_k ≥ 0 and Σ_k p_k = 1. They obtain that X_{c,d,A}, the number of vertices in A of G_{n,m} ∈ (n, m; {p_k}) with degree between c and d, has an asymptotically Poisson distribution, and they answer the following two questions about the space (n, m; {p_k}) with {p_k} having a geometric, binomial, or Poisson distribution, respectively. Under which condition on {p_k} is there a function D(n) such that almost every random multigraph G_{n,m} ∈ (n, m; {p_k}) has maximum degree D(n) in A? Under which condition on {p_k} does almost every multigraph G_{n,m} ∈ (n, m; {p_k}) have a unique vertex of maximum degree in A?
Minimum disturbance rewards with maximum possible classical correlations
Pande, Varad R., E-mail: varad_pande@yahoo.in [Department of Physics, Indian Institute of Science Education and Research Pune, 411008 (India); Shaji, Anil [School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, 695016 (India)
2017-07-12
Weak measurements done on a subsystem of a bipartite system having both classical and nonclassical correlations between its components can potentially reveal information about the other subsystem with minimal disturbance to the overall state. We use weak quantum discord and the fidelity between the initial bipartite state and the state after measurement to construct a cost function that accounts for both the amount of information revealed about the other system as well as the disturbance to the overall state. We investigate the behaviour of the cost function for families of two qubit states and show that there is an optimal choice that can be made for the strength of the weak measurement. - Highlights: • Weak measurements done on one part of a bipartite system with controlled strength. • Weak quantum discord & fidelity used to quantify all correlations and disturbance. • Cost function to probe the tradeoff between extracted correlations and disturbance. • Optimal measurement strength for maximum extraction of classical correlations.
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, the Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, the maximum thermal efficiency, and the maximum power may become equivalent at the condition of fixed heat input.
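For reference, the two fixed operating regimes compared in this abstract have standard closed-form efficiencies for the Curzon-Ahlborn engine (a textbook result stated here under the usual endoreversible assumptions, with hot- and cold-reservoir temperatures T_H and T_C):

```latex
% Maximum thermal efficiency (Carnot limit) vs efficiency at maximum power
\eta_{\max} = 1 - \frac{T_C}{T_H},
\qquad
\eta_{\mathrm{CA}} = 1 - \sqrt{\frac{T_C}{T_H}},
\qquad
\eta_{\mathrm{CA}} \le \eta_{\max}.
```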
Measurement and relevance of maximum metabolic rate in fishes.
Norin, T; Clark, T D
2016-01-01
Maximum (aerobic) metabolic rate (MMR) is defined here as the maximum rate of oxygen consumption (ṀO2max) that a fish can achieve at a given temperature under any ecologically relevant circumstance. Different techniques exist for eliciting the MMR of fishes, of which swim-flume respirometry (critical swimming speed tests and burst-swimming protocols) and exhaustive chases are the most common. Available data suggest that the most suitable method for eliciting MMR varies with species and ecotype, and depends on the propensity of the fish to sustain swimming for extended durations as well as its capacity to simultaneously exercise and digest food. MMR varies substantially (>10-fold) between species with different lifestyles (i.e. interspecific variation) and, to a lesser extent, within species. Because MMR sets the upper bound of aerobic scope, interest in measuring this trait has spread across disciplines in attempts to predict effects of climate change on fish populations. Here, the various techniques used to elicit and measure MMR in different fish species with contrasting lifestyles are outlined, and the relevance of MMR to the ecology, fitness and climate change resilience of fishes is discussed.
Jacoby, Gordon
2008-01-01
Ecological systems in the headwaters of the Yellow River, characterized by harsh natural environmental conditions, are very vulnerable to climatic change. In recent decades, this area has attracted great public attention for its increasingly deteriorating environmental conditions. Based on tree-ring samples from the Xiqing Mountain and A'nyêmagên Mountains at the headwaters of the Yellow River in the Northeastern Tibetan Plateau, we reconstructed the minimum temperatures in the winter half-year over the last 425 years and the maximum temperatures in the summer half-year over the past 700 years in this region. The minimum temperature in the winter half-year showed a relatively stable trend during 1578-1940, followed by an abrupt warming trend since 1941. However, there is no significant warming trend in the maximum temperature in the summer half-year over the 20th century. Asymmetric variation patterns between the minimum and maximum temperatures were thus observed over the past 425 years: although the two series show similar variation patterns, the minimum temperatures vary about 25 years earlier than the maximum temperatures. If this relationship between the minimum and maximum temperatures continues over the next 30 years, the maximum temperature in this region will increase significantly.
2010-10-01
Maximum and minimum allowable operating pressure; low-pressure distribution systems (49 CFR, Transportation): (a) No person may operate a low-pressure distribution system at a pressure high enough to …; (b) no person may operate a low-pressure distribution system at a pressure lower than the minimum …
(author not listed)
2011-01-01
[Objective] The research aimed to analyze the temporal and spatial variation characteristics of temperature in Shangqiu City during 1961-2010. [Method] Based on temperature data from eight meteorological stations in Shangqiu during 1961-2010, and by using the trend analysis method, the temporal and spatial evolution characteristics of the annual average temperature, annual average maximum and minimum temperatures, annual extreme maximum and minimum temperatures, and daily range of the annual average temperature in Shangqiu City were analyzed.
Observed Abrupt Changes in Minimum and Maximum Temperatures in Jordan in the 20th Century
Mohammad M. Samdi
2006-01-01
This study examines changes in annual and seasonal mean (minimum and maximum) temperature variations in Jordan during the 20th century. The analyses focus on the time-series records at the Amman Airport Meteorological (AAM) station. The occurrence of abrupt changes and trends was examined using cumulative sum charts (CUSUM), bootstrapping, and the Mann-Kendall rank test. Statistically significant abrupt changes and trends have been detected. Major change points in the mean minimum (night-time) and mean maximum (day-time) temperatures occurred in 1957 and 1967, respectively. A minor change point in the annual mean maximum temperature also occurred in 1954, which is in essential agreement with the detected change in minimum temperature. The analysis showed a significant warming trend after 1957 and 1967 for the minimum and maximum temperatures, respectively. The analysis of maximum temperatures shows a significant warming trend after 1967 for the summer season, with a rate of temperature increase of 0.038°C/year. The analysis of minimum temperatures shows a significant warming trend after 1957 for all seasons. Temperature and rainfall data from other stations in the country were also considered and showed similar changes.
Camarrone, Flavio; Ivanova, Anna; Decoster, Wivine; de Jong, Felix; van Hulle, Marc M
2015-01-01
To examine whether the minimum as well as the maximum voice intensity (i.e. sound pressure level, SPL) curves of a voice range profile (VRP) are required when discovering different voice groups based on a clustering analysis. In this approach, no a priori labeling of voice types is used. VRPs of 194 (84 male and 110 female) professional singers were registered and processed. Cluster analysis was performed with the use of features related to (1) both the maximum and minimum SPL curves and (2) the maximum SPL curve only. Features related to the maximum as well as the minimum SPL curves showed three clusters in both male and female voices. These clusters, or voice groups, are based on voice types with similar VRP features. However, when using features related only to the maximum SPL curve, the clusters became less obvious. Features related to the maximum and minimum SPL curves of a VRP are both needed in order to identify the three voice clusters. © 2016 S. Karger AG, Basel.
Challenging Minimum Deterrence: Articulating the Contemporary Relevance of Nuclear Weapons
2016-07-13
actually mean to interstate relations.4 Here, the arguments of modern advocates for minimum deterrence come into play. In a 2010 article advocating... recapitalization expenditures over the next couple of decades. This emphasis on the ICBM and SLBM legs as a constant existential deterrent does not mean... elements of the US nuclear force gives this debate added meaning and urgency. One alternative currently under discussion is minimum deterrence. This
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
Stipanovic, Dusan M., E-mail: dusan@illinois.edu [University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Department of Industrial and Enterprise Systems Engineering (United States); Tomlin, Claire J., E-mail: tomlin@eecs.berkeley.edu [University of California at Berkeley, Department of Electrical Engineering and Computer Science (United States); Leitmann, George, E-mail: gleit@berkeley.edu [University of California at Berkeley, College of Engineering (United States)
2012-12-15
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
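The abstract above does not reproduce the paper's specific approximating functions, but a standard family with the stated properties (differentiable, and converging monotonically to the exact minimum or maximum as a sharpness parameter grows) is log-sum-exp smoothing; the sketch below is an illustration of that generic construction, not the authors' exact formulation:

```python
import math

def soft_min(values, p):
    """Smooth approximation of min(values) via log-sum-exp.
    Approaches min(values) from below, monotonically, as p -> infinity."""
    m = min(values)  # subtract the minimum for numerical stability
    return m - (1.0 / p) * math.log(sum(math.exp(-p * (v - m)) for v in values))

def soft_max(values, p):
    """Smooth approximation of max(values); approaches it from above as p grows."""
    m = max(values)
    return m + (1.0 / p) * math.log(sum(math.exp(p * (v - m)) for v in values))
```

Because both functions are differentiable in their arguments, they can replace the non-smooth min/max in gradient-based multi-objective conditions of the kind the paper discusses.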
Pan, Sudip; Solà, Miquel; Chattaraj, Pratim K
2013-02-28
Hardness and electrophilicity values for several molecules involved in different chemical reactions are calculated at various levels of theory and by using different basis sets. Effects of these aspects as well as different approximations to the calculation of those values vis-à-vis the validity of the maximum hardness and minimum electrophilicity principles are analyzed in the cases of some representative reactions. Among 101 studied exothermic reactions, 61.4% and 69.3% of the reactions are found to obey the maximum hardness and minimum electrophilicity principles, respectively, when hardness of products and reactants is expressed in terms of their geometric means. However, when we use arithmetic mean, the percentage reduces to some extent. When we express the hardness in terms of scaled hardness, the percentage obeying maximum hardness principle improves. We have observed that maximum hardness principle is more likely to fail in the cases of very hard species like F(-), H(2), CH(4), N(2), and OH appearing in the reactant side and in most cases of the association reactions. Most of the association reactions obey the minimum electrophilicity principle nicely. The best results (69.3%) for the maximum hardness and minimum electrophilicity principles reject the 50% null hypothesis at the 2% level of significance.
AlPOs Synthetic Factor Analysis Based on Maximum Weight and Minimum Redundancy Feature Selection
Yinghua Lv
2013-11-01
The relationship between synthetic factors and the resulting structures is critical for the rational synthesis of zeolites and related microporous materials. In this paper, we develop a new feature selection method for synthetic factor analysis of (6,12)-ring-containing microporous aluminophosphates (AlPOs). The proposed method is based on a maximum weight and minimum redundancy criterion. With the proposed method, we can select the feature subset in which the features are most relevant to the synthetic structure while the redundancy among the selected features is minimal. Based on the database of AlPO synthesis, we use (6,12)-ring-containing AlPOs as the target class and incorporate 21 synthetic factors, including gel composition, solvent and organic template, to predict the formation of (6,12)-ring-containing AlPOs. From these 21 features, 12 selected features are deemed the optimized subset to distinguish (6,12)-ring-containing AlPOs from AlPOs without such rings. The prediction model achieves a classification accuracy of 91.12% using the optimal feature subset. Comprehensive experiments demonstrate the effectiveness of the proposed algorithm, and a deep analysis is given of the synthetic factors selected by the proposed method.
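A maximum-weight/minimum-redundancy criterion of this general kind is typically optimized by greedy forward selection. The sketch below is a hedged illustration: the per-feature relevance weights and the use of absolute Pearson correlation as the redundancy measure are assumptions for the example, not the paper's exact definitions:

```python
import math

def pearson(x, y):
    """Sample Pearson correlation between two equal-length value lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    if sx == 0 or sy == 0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def greedy_select(features, weights, k):
    """features: dict name -> list of sample values; weights: relevance per feature.
    Greedily picks k features, each maximizing (weight - mean redundancy
    with the already-selected set)."""
    selected = []
    candidates = set(features)
    while candidates and len(selected) < k:
        def score(f):
            if not selected:
                return weights[f]
            red = sum(abs(pearson(features[f], features[s]))
                      for s in selected) / len(selected)
            return weights[f] - red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

For example, given a feature that is a scaled copy of an already-selected one, its redundancy penalty of 1.0 pushes the selection toward a less relevant but uncorrelated feature instead.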
Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds
Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)
2016-03-15
The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship between the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage ionic character and internuclear distances of ionic compounds. Comparative studies with experimental data sets reveal that the proposed methods of computing the percentage ionic character and internuclear distances of ionic compounds are valid.
Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model
Yang, Yuefang; Gan, Chunhui; Shen, Tingting
2017-05-01
In the study of tanker configuration for a chemical logistics park, the minimum cost maximum flow model is adopted. First, the transport capacity of the park's loading and unloading area and the transportation demand for dangerous goods are taken as the constraint conditions of the model; then the transport arc capacity, transport arc flow, and transport arc edge weight are determined in the transportation network diagram; finally, the model is solved by software. The calculation results show that the tanker configuration issue can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical application value for tanker management in railway transportation of dangerous goods in chemical logistics parks.
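The minimum cost maximum flow model named above can be sketched with a generic successive-shortest-path implementation (a textbook algorithm, not the authors' specific software; node and edge data here are illustrative):

```python
from collections import deque

class MinCostMaxFlow:
    """Successive shortest augmenting paths; SPFA finds the cheapest residual path."""

    def __init__(self, n):
        self.n = n
        self.graph = [[] for _ in range(n)]  # edge: [to, capacity, cost, rev_index]

    def add_edge(self, u, v, cap, cost):
        self.graph[u].append([v, cap, cost, len(self.graph[v])])
        self.graph[v].append([u, 0, -cost, len(self.graph[u]) - 1])  # residual edge

    def solve(self, s, t):
        """Returns (maximum_flow, minimum_cost_of_that_flow)."""
        INF = float("inf")
        flow = cost = 0
        while True:
            # SPFA (Bellman-Ford with a queue): cheapest path in the residual graph
            dist = [INF] * self.n
            dist[s] = 0
            prev = [None] * self.n
            in_queue = [False] * self.n
            q = deque([s])
            while q:
                u = q.popleft()
                in_queue[u] = False
                for i, (v, cap, c, _) in enumerate(self.graph[u]):
                    if cap > 0 and dist[u] + c < dist[v]:
                        dist[v] = dist[u] + c
                        prev[v] = (u, i)
                        if not in_queue[v]:
                            in_queue[v] = True
                            q.append(v)
            if dist[t] == INF:  # no augmenting path remains
                return flow, cost
            # bottleneck capacity along the cheapest path
            f, v = INF, t
            while v != s:
                u, i = prev[v]
                f = min(f, self.graph[u][i][1])
                v = u
            # push flow and update residual capacities
            v = t
            while v != s:
                u, i = prev[v]
                self.graph[u][i][1] -= f
                self.graph[v][self.graph[u][i][3]][1] += f
                v = u
            flow += f
            cost += f * dist[t]
```

In a tanker-allocation setting of the kind described, nodes would represent loading/unloading areas and demand points, arc capacities the handling limits, and arc costs the transport weights.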
Govatski, J. A.; da Luz, M. G. E.; Koehler, M.
2015-01-01
We study the geminate pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy which reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation of energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.
Trends in Mean Annual Minimum and Maximum Near Surface Temperature in Nairobi City, Kenya
George Lukoye Makokha
2010-01-01
This paper examines the long-term urban modification of mean annual conditions of near-surface temperature in Nairobi City. Data from four weather stations situated in Nairobi were collected from the Kenya Meteorological Department for the period from 1966 to 1999 inclusive. The data included mean annual maximum and minimum temperatures, and were first subjected to a homogeneity test before analysis. Both linear regression and the Mann-Kendall rank test were used to discern the mean annual trends. Results show that the change in temperature over the thirty-four-year study period is higher for minimum temperature than for maximum temperature. The warming trends began earlier and are more significant at the urban stations than at the sub-urban stations, an indication of the spread of urbanisation from the built-up Central Business District (CBD) to the suburbs. The established significant warming trends in minimum temperature, which are likely to reach higher proportions in future, pose serious challenges for the climate and urban planning of the city. In particular, the effect of increased minimum temperature on human physiological comfort, building and urban design, wind circulation and air pollution needs to be incorporated in future urban planning programmes of the city.
Weak minimum aberration and maximum number of clear two-factor interactions in 2
YANG; Guijun
2005-01-01
[1] Wu, C. F. J., Chen, Y., A graph-aided method for planning two-level experiments when certain interactions are important, Technometrics, 1992, 34: 162-175. [2] Fries, A., Hunter, W. G., Minimum aberration 2^(k-p) designs, Technometrics, 1980, 22: 601-608. [3] Chen, H., Hedayat, A. S., 2^(n-l) designs with weak minimum aberration, Ann. Statist., 1996, 24: 2536-2548. [4] Chen, J., Some results on 2^(n-k) fractional factorial designs and search for minimum aberration designs, Ann. Statist., 1992, 20: 2124-2141. [5] Chen, J., Intelligent search for 2^(13-6) and 2^(14-7) minimum aberration designs, Statist. Sinica, 1998, 8: 1265-1270. [6] Chen, J., Sun, D. X., Wu, C. F. J., A catalogue of two-level and three-level fractional factorial designs with small runs, Internat. Statist. Rev., 1993, 61: 131-145. [7] Chen, J., Wu, C. F. J., Some results on 2^(n-k) fractional factorial designs with minimum aberration or optimal moments, Ann. Statist., 1991, 19: 1028-1041. [8] Cheng, C. S., Mukerjee, R., Regular fractional factorial designs with minimum aberration and maximum estimation capacity, Ann. Statist., 1998, 26: 2289-2300. [9] Cheng, C. S., Steinberg, D. M., Sun, D. X., Minimum aberration and model robustness for two-level fractional factorial designs, J. Roy. Statist. Soc. Ser. B, 1999, 61: 85-93. [10] Draper, N. R., Lin, D. K. J., Capacity consideration for two-level fractional factorial designs, J. Statist. Plann. Inference, 1990, 24: 25-35. [11] Fang, K. T., Mukerjee, R., A connection between uniformity and aberration in regular fractions of two-level factorials, Biometrika, 2000, 87: 193-198. [12] Tang, B., Wu, C. F. J., Characterization of minimum aberration 2^(n-k) designs in terms of their complementary designs, Ann. Statist., 1996, 24: 2549-2559. [13] Chen, H., Hedayat, A. S., 2^(n-m) designs with resolution III or IV containing clear two-factor interactions, J. Statist. Plann. Inference, 1998, 75: 147-158. [14] Tang, B., Ma, F., Ingram, D., Wang, H., Bounds on the maximum numbers of clear two factor
SU-E-T-578: On Definition of Minimum and Maximum Dose for Target Volume
Gong, Y; Yu, J; Xiao, Y [Thomas Jefferson University Hospital, Philadelphia, PA (United States)
2015-06-15
Purpose: This study aims to investigate the impact of different minimum and maximum dose definitions in radiotherapy treatment plan quality evaluation criteria by using tumor control probability (TCP) models. Methods: Dosimetric criteria from the RTOG 1308 protocol are used in the investigation. RTOG 1308 is a phase III randomized trial comparing overall survival after photon versus proton chemoradiotherapy for inoperable stage II-IIIB NSCLC. The prescription dose for the planning target volume (PTV) is 70 Gy. The maximum dose (Dmax) should not exceed 84 Gy and the minimum dose (Dmin) should not go below 59.5 Gy in order for the plan to be “per protocol” (satisfactory). A mathematical model that simulates the characteristics of the PTV dose volume histogram (DVH) curve with normalized volume is built. Dmax and Dmin are denoted as the percentage-volume doses Dη% and D(100-δ)%, with η and δ ranging from 0 to 3.5. The model includes three straight-line sections and goes through four points: D95% = 70 Gy, Dη% = 84 Gy, D(100-δ)% = 59.5 Gy, and D100% = 0 Gy. For each set of η and δ, the TCP value is calculated using the inhomogeneously irradiated tumor logistic model with D50 = 74.5 Gy and γ50 = 3.52. Results: TCP varies within 0.9% for η and δ values between 0 and 1. With η and δ varying between 0 and 2, the TCP change was up to 2.4%. With η and δ variations from 0 to 3.5, a maximum TCP difference of 8.3% is seen. Conclusion: When the volumes defining maximum and minimum dose varied by more than 2%, significant TCP variations were seen. It is recommended that less than 2% volume be used in the definition of Dmax or Dmin for target dosimetric evaluation criteria. This project was supported by NIH grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant.
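The abstract quotes D50 = 74.5 Gy and γ50 = 3.52 for a logistic TCP model. As a hedged illustration only, one common uniform-dose logistic form with those two parameters is sketched below; the inhomogeneous-dose model actually used in the study is more involved than this single-dose backbone:

```python
def tcp_logistic(d, d50=74.5, gamma50=3.52):
    """One common logistic TCP form for a uniform dose d (in Gy):
    TCP = 1 / (1 + (d50/d) ** (4 * gamma50)).
    TCP = 0.5 at d = d50; gamma50 sets the normalized slope at that point."""
    return 1.0 / (1.0 + (d50 / d) ** (4.0 * gamma50))
```

With these illustrative parameters the curve is steep: moving from the protocol's Dmin bound (59.5 Gy) through the prescription (70 Gy) to the Dmax bound (84 Gy) spans a wide TCP range, which is why small shifts in how Dmin/Dmax are defined can change the computed TCP noticeably.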
MARSpline model for lead seven-day maximum and minimum air temperature prediction in Chennai, India
K Ramesh; R Anitha
2014-06-01
In this study, a Multivariate Adaptive Regression Spline (MARS) based system for predicting minimum and maximum surface air temperature up to seven days ahead is modelled for the station Chennai, India. To emphasize the effectiveness of the proposed system, a comparison is made with models created using the statistical learning technique Support Vector Machine Regression (SVMr). The analysis highlights that the prediction accuracy of MARS models for the minimum temperature forecast is promising in the short term (lead days 1 to 3), with a mean absolute error (MAE) of less than 1°C, while prediction efficiency and skill degrade in the medium term (lead days 4 to 7), with MAE slightly above 1°C. The MAE of the maximum temperature forecast is slightly higher than that of the minimum temperature forecast, varying from 0.87°C for day one to 1.27°C for day seven with the MARS approach. The statistical error analysis emphasizes that MARS models perform well, with an average reduction in MAE of 0.2°C over SVMr models across all seven lead days, and provide significant guidance for the prediction of temperature events. The study also suggests that the correlation between the atmospheric parameters used as predictors and the temperature event decreases as the lead time increases with both approaches.
Wu, Yating; Kuang, Bin; Wang, Tao; Zhang, Qianwu; Wang, Min
2015-12-01
This paper presents a minimum cost maximum flow (MCMF) based upstream bandwidth allocation algorithm, which supports differentiated QoS for orthogonal frequency division multiple access passive optical networks (OFDMA-PONs). We define a utility function as the metric to characterize the satisfaction degree of an ONU on the obtained bandwidth. The bandwidth allocation problem is then formulated as maximizing the sum of the weighted total utility functions of all ONUs. By constructing a flow network graph, we obtain the optimized bandwidth allocation using the MCMF algorithm. Simulation results show that the proposed scheme improves the performance in terms of mean packet delay, packet loss ratio and throughput.
Changes in atmospheric circulation between solar maximum and minimum conditions in winter and summer
Lee, Jae Nyung
2008-10-01
Statistically significant climate responses to the solar variability are found in the Northern Annular Mode (NAM) and in the tropical circulation. This study is based on the statistical analysis of numerical simulations with the ModelE version of the chemistry-coupled Goddard Institute for Space Studies (GISS) general circulation model (GCM) and National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. The low-frequency, large-scale variability of the winter and summer circulation is described by the NAM, the leading Empirical Orthogonal Function (EOF) of geopotential heights. The newly defined seasonal annular modes and their dynamical significance in the stratosphere and troposphere in the GISS ModelE are shown and compared with those in the NCEP/NCAR reanalysis. In the stratosphere, the summer NAM obtained from the NCEP/NCAR reanalysis as well as from the ModelE simulations has the same sign throughout the northern hemisphere, but shows greater variability at low latitudes. The patterns in both analyses are consistent with the interpretation that low NAM conditions represent an enhancement of the seasonal difference between the summer and the annual averages of geopotential height, temperature and velocity distributions, while the reverse holds for high NAM conditions. Composite analysis of high and low NAM cases in both the model and the observations suggests that the summer stratosphere is more "summer-like" when solar activity is near a maximum. This means that the zonal easterly wind flow is stronger and the temperature is higher than normal. Thus increased irradiance favors a low summer NAM. A quantitative comparison of the anti-correlation between the NAM and the solar forcing is presented in the model and in the observations, both of which show a lower/higher NAM index in solar maximum/minimum conditions. The summer NAM in the troposphere obtained from the NCEP/NCAR reanalysis has a dipolar zonal structure with maximum
On the maximum and minimum of two modified Gamma-Gamma variates with applications
Al-Quwaiee, Hessa
2014-04-01
In this work, we derive the statistical characteristics of the maximum and the minimum of two modified Gamma-Gamma variates in closed form in terms of Meijer's G-function and the extended generalized bivariate Meijer's G-function. We then rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection-combining diversity scheme undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and (ii) a dual-hop free-space optical relay transmission system. Computer-based Monte Carlo simulations verify our new analytical results.
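A Monte Carlo cross-check of the kind mentioned at the end of the abstract can be sketched as follows. This assumes the usual construction of a Gamma-Gamma variate as the product of two independent unit-mean Gamma variates (the "modified" variant and the pointing-error impairment in the paper are not reproduced), and the shape parameters are illustrative:

```python
import random

def gamma_gamma(a, b, rng):
    """One Gamma-Gamma sample: product of two independent unit-mean
    Gamma variates with shapes a and b (standard turbulence-fading construction)."""
    return rng.gammavariate(a, 1.0 / a) * rng.gammavariate(b, 1.0 / b)

def mc_max_min(a, b, n=20000, seed=42):
    """Monte Carlo estimates of E[max(X, Y)] and E[min(X, Y)] for two
    i.i.d. Gamma-Gamma variates X, Y."""
    rng = random.Random(seed)
    pairs = [(gamma_gamma(a, b, rng), gamma_gamma(a, b, rng)) for _ in range(n)]
    e_max = sum(max(p) for p in pairs) / n
    e_min = sum(min(p) for p in pairs) / n
    return e_max, e_min
```

Since each variate has unit mean, E[max] + E[min] = 2 exactly, which gives a simple sanity check on the simulation; the closed-form G-function expressions in the paper would be validated against estimates of this sort.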
Realization of Minimum and Maximum Gate Function in Ta2O5-based Memristive Devices
Breuer, Thomas; Nielen, Lutz; Roesgen, Bernd; Waser, Rainer; Rana, Vikas; Linn, Eike
2016-04-01
Redox-based resistive switching devices (ReRAM) are considered key enablers for future non-volatile memory and logic applications. Functionally enhanced ReRAM devices could enable new hardware concepts, e.g. logic-in-memory or neuromorphic applications. In this work, we demonstrate the implementation of ReRAM-based fuzzy logic gates using Ta2O5 devices to enable analog Minimum and Maximum operations. The realized gates consist of two anti-serially connected ReRAM cells offering two inputs and one output. The cells offer an endurance of up to 10^6 cycles. By means of exemplary input signals, each gate functionality is verified and signal constraints are highlighted. This realization could improve the efficiency of analog processing tasks such as sorting networks in the future.
Verification of surface minimum, mean, and maximum temperature forecasts in Calabria for summer 2008
S. Federico
2011-02-01
Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies / Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online at http://meteo.crati.it/previsioni.html (every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (6 June to 30 September, 112 runs). For this purpose, gridded high-horizontal-resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km).
Gridded analysis is based on Optimal Interpolation (OI) and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while a spatial interpolation (SI) analysis is considered to quantify the statistics' sensitivity to the verifying analysis and to show the quality of the OI analyses for different background fields.
Two case studies, the first with a low root mean square error (RMSE, below the 10th percentile) in the OI analysis and the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. For the period considered
Asymptotic Behavior of the Maximum and Minimum Singular Value of Random Vandermonde Matrices
Tucci, Gabriel H
2012-01-01
This work examines various statistical distributions in connection with random Vandermonde matrices and their extension to $d$-dimensional phase distributions. Upper and lower bound asymptotics for the maximum singular value are found to be $O(\log N^d)$ and $O(\log N^d/\log\log N^d)$ respectively, where $N$ is the dimension of the matrix, generalizing the results in [TW]. We further study the behavior of the minimum singular value of a random Vandermonde matrix. In particular, we prove that the minimum singular value is at most $N^2\exp(-C\sqrt{N})$, where $N$ is the dimension of the matrix and $C$ is a constant. Furthermore, the value of the constant $C$ is determined explicitly. The main result is obtained in two different ways. One approach uses techniques from stochastic processes and, in particular, a construction related to the Brownian bridge. The other is a more direct analytical approach involving combinatorics and complex analysis. As a consequence, we obtain a lower bound for the maxi...
Examining the prey mass of terrestrial and aquatic carnivorous mammals: minimum, maximum and range.
Tucker, Marlee A; Rogers, Tracey L
2014-01-01
Predator-prey body mass relationships are a vital part of food webs across ecosystems and provide key information for predicting the susceptibility of carnivore populations to extinction. Despite this, there has been limited research on the minimum and maximum prey size of mammalian carnivores. Without information on large-scale patterns of prey mass, we limit our understanding of predation pressure, trophic cascades and susceptibility of carnivores to decreasing prey populations. The majority of studies that examine predator-prey body mass relationships focus on either a single or a subset of mammalian species, which limits the strength of our models as well as their broader application. We examine the relationship between predator body mass and the minimum, maximum and range of their prey's body mass across 108 mammalian carnivores, from weasels to baleen whales (Carnivora and Cetacea). We test whether mammals show a positive relationship between prey and predator body mass, as in reptiles and birds, as well as examine how environment (aquatic and terrestrial) and phylogenetic relatedness play a role in this relationship. We found that phylogenetic relatedness is a strong driver of predator-prey mass patterns in carnivorous mammals and accounts for a higher proportion of variance compared with the biological drivers of body mass and environment. We show a positive predator-prey body mass pattern for terrestrial mammals as found in reptiles and birds, but no relationship for aquatic mammals. Our results will benefit our understanding of trophic interactions, the susceptibility of carnivores to population declines and the role of carnivores within ecosystems.
National Aeronautics and Space Administration — Probability Calibration by the Minimum and Maximum Probability Scores in One-Class Bayes Learning for Anomaly Detection. Guichong Li, Nathalie Japkowicz, Ian Hoffman,...
Estimating minimum and maximum air temperature using MODIS data over Indo-Gangetic Plain
D B Shah; M R Pandya; H J Trivedi; A R Jani
2013-12-01
Spatially distributed air temperature data are required for climatological, hydrological and environmental studies. However, high-spatial-resolution air temperature patterns are not available from meteorological stations due to the sparseness of the network. The objective of this study was to estimate high-spatial-resolution minimum air temperature (Tmin) and maximum air temperature (Tmax) over the Indo-Gangetic Plain using Moderate Resolution Imaging Spectroradiometer (MODIS) data and India Meteorological Department (IMD) ground station data. Tmin was estimated by establishing an empirical relationship between IMD Tmin and night-time MODIS Land Surface Temperature (Ts), while Tmax was estimated using the Temperature-Vegetation Index (TVX) approach. The TVX approach is based on the linear relationship between Ts and Normalized Difference Vegetation Index (NDVI) data, where Tmax is estimated by extrapolating the NDVI-Ts regression line to the maximum NDVI value (NDVImax) for effective full vegetation cover. The present study also proposed a methodology to estimate NDVImax using IMD-measured Tmax for the Indo-Gangetic Plain. Comparison of MODIS-estimated Tmin with IMD-measured Tmin showed a mean absolute error (MAE) of 1.73°C and a root mean square error (RMSE) of 2.2°C. The analysis for Tmax estimation showed that the calibrated NDVImax performed well, with an MAE of 1.79°C and an RMSE of 2.16°C.
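The TVX extrapolation described in the abstract amounts to fitting an ordinary least-squares line through the NDVI versus land-surface-temperature scatter of a pixel window and evaluating it at the full-cover NDVI value. A minimal sketch (variable names and sample values are illustrative, not from the paper):

```python
def tvx_tmax(ndvi, ts, ndvi_max):
    """Fit Ts = a + b * NDVI by least squares over a pixel window, then
    extrapolate to ndvi_max (effective full vegetation cover) to estimate Tmax."""
    n = len(ndvi)
    mx = sum(ndvi) / n
    my = sum(ts) / n
    sxx = sum((x - mx) ** 2 for x in ndvi)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ndvi, ts))
    b = sxy / sxx          # slope (typically negative: Ts falls as NDVI rises)
    a = my - b * mx        # intercept
    return a + b * ndvi_max
```

The study's contribution of calibrating NDVImax against IMD-measured Tmax corresponds to choosing the `ndvi_max` argument so that this extrapolated value best matches station observations.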
Lussana, C.
2013-04-01
The presented work focuses on the investigation of gridded daily minimum (TN) and maximum (TX) temperature probability density functions (PDFs), with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure uses the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both a time period and spatial areas with a stable data density, since otherwise the elaboration could be influenced by the unsettled station distribution. The PDF used in this study is based on the Gaussian distribution; nevertheless, it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events is properly defined, the information can be straightforwardly delivered to users on a local scale in a concise way, such as: TX extremely cold/hot or TN extremely cold/hot.
Uncertainties in transient projections of maximum and minimum flows over the United States
Giuntoli, Ignazio; Villarini, Gabriele; Prudhomme, Christel; Hannah, David M.
2016-04-01
Global multi-model ensemble experiments provide a valuable basis for the examination of potential future changes in runoff. However, these projections suffer from uncertainties that originate from different sources at different levels in the modelling chain. We present the partitioning of uncertainty into four distinct sources for projections of decadally-averaged annual maximum (AMax) and minimum (AMin) flows over the USA. More specifically, we quantify the relative contribution of the uncertainties arising from internal variability, global impact models (GIMs), global climate models (GCMs), and representative concentration pathways (RCPs). We use a set of nine state-of-the-art GIMs driven by five CMIP5 GCMs under four RCPs from the ISI-MIP multi-model ensemble. We examine the temporal changes in the relative contribution of each source of uncertainty over the course of the 21st century. Results show that GCMs and GIMs are responsible for the majority of uncertainty over most of the study area, followed by internal variability and RCPs. Proportions vary regionally and depend on the end of the runoff spectrum (AMax, AMin) considered. In particular, for AMax, large fractions of uncertainty are attributable to GCMs throughout the century, with the GIMs increasing their share especially in mountainous and cold areas. For AMin, the contribution of GIMs to uncertainty increases with time, becoming the dominant source over most of the country by the end of the 21st century. Importantly, compared to the other sources, the RCPs' contribution to uncertainty is negligible in general (for AMin especially). This finding indicates that the effects of different emission scenarios are barely noticeable in hydrological impact studies, while GIMs and GCMs make up most of the amplitude of the ensemble spread (uncertainty).
Rui A. P. Perdigão
2012-06-01
The application of the Maximum Entropy (ME) principle leads to a minimum of the Mutual Information (MI), I(X,Y), between random variables X and Y, which is compatible with prescribed joint expectations and given ME marginal distributions. A sequence of sets of joint constraints leads to a hierarchy of lower MI bounds increasingly approaching the true MI. In particular, using standard bivariate Gaussian marginal distributions, it allows for the MI decomposition into two positive terms: the Gaussian MI (I_g), depending upon the Gaussian correlation or the correlation between 'Gaussianized variables', and a non-Gaussian MI (I_ng), coinciding with joint negentropy and depending upon nonlinear correlations. Joint moments of a prescribed total order p are bounded within a compact set defined by Schwarz-like inequalities, where I_ng grows from zero at the 'Gaussian manifold', where moments are those of Gaussian distributions, towards infinity at the set's boundary, where a deterministic relationship holds. Sources of joint non-Gaussianity have been systematized by estimating I_ng between the input and output of a nonlinear synthetic channel contaminated by multiplicative and non-Gaussian additive noises for a full range of signal-to-noise ratio (snr) variances. We have studied the effect of varying snr on I_g and I_ng under several signal/noise scenarios.
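For the bivariate Gaussian case described above, the Gaussian MI term follows directly from the correlation ρ between Gaussianized variables as I_g = -½ ln(1 - ρ²). A minimal sketch (the simulated sample and the correlation value 0.8 are illustrative, not taken from the paper):

```python
import numpy as np

def gaussian_mi(rho):
    """Gaussian component of MI (in nats): I_g = -0.5 * ln(1 - rho^2)."""
    return -0.5 * np.log(1.0 - rho**2)

# Simulate a pair of standard Gaussians with correlation ~0.8 and
# recover I_g from their sample correlation coefficient.
rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
y = 0.8 * x + np.sqrt(1 - 0.8**2) * rng.standard_normal(n)
rho = np.corrcoef(x, y)[0, 1]
print(gaussian_mi(rho))  # close to -0.5*ln(0.36) ~= 0.511 nats
```

For jointly Gaussian variables I_ng vanishes and I_g is the total MI; any nonlinear dependence adds the non-Gaussian term on top of this value.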
Climate change uncertainty for daily minimum and maximum temperatures: a model inter-comparison
Lobell, D; Bonfils, C; Duffy, P
2006-11-09
Several impacts of climate change may depend more on changes in mean daily minimum (T_min) or maximum (T_max) temperatures than on daily averages. To evaluate uncertainties in these variables, we compared projections of T_min and T_max changes by 2046-2065 for 12 climate models under an A2 emission scenario. Average modeled changes in T_max were slightly lower in most locations than in T_min, consistent with historical trends exhibiting a reduction in diurnal temperature range. However, while average changes in T_min and T_max were similar, the inter-model variability of T_min and T_max projections exhibited substantial differences. For example, inter-model standard deviations of June-August T_max changes were more than 50% greater than for T_min throughout much of North America, Europe, and Asia. Model differences in cloud changes, which exert relatively greater influence on T_max during summer and T_min during winter, were identified as the main source of these uncertainty disparities. These results highlight the importance of considering projections for T_max and T_min separately when assessing climate change impacts, even in cases where average projected changes are similar. In addition, impacts that are most sensitive to summertime T_min or wintertime T_max may be more predictable than suggested by analyses using only projections of daily average temperatures.
Carlos A. L. Pires
2013-02-01
The Minimum Mutual Information (MinMI) Principle provides the least committed, maximum-joint-entropy (ME) inferential law that is compatible with prescribed marginal distributions and empirical cross constraints. Here, we estimate the MI bounds (the MinMI values) generated by constraining sets Tcr comprising mcr linear and/or nonlinear joint expectations, computed from samples of N iid outcomes. Marginals (and their entropy) are imposed by single morphisms of the original random variables. N-asymptotic formulas are given for the distribution of cross expectation estimation errors and for the MinMI estimation bias, its variance and its distribution. A growing Tcr leads to an increasing MinMI, converging eventually to the total MI. Under N-sized samples, the MinMI increment relative to two encapsulated sets Tcr1 ⊂ Tcr2 (with numbers of constraints mcr1
Weak minimum aberration and maximum number of clear two-factor interactions in 2^(m-p) resolution IV designs
YANG Guijun; LIU Minqian; ZHANG Runchu
2005-01-01
Both clear effects and minimum aberration are important criteria for design selection. In this paper, it is proved that some 2^(m-p) resolution IV designs have weak minimum aberration, by considering the number of clear two-factor interactions in the designs. Some conditions are also provided under which a 2^(m-p) resolution IV design can have the maximum number of clear two-factor interactions and weak minimum aberration at the same time. Some weak minimum aberration 2^(m-p) resolution IV designs are provided as illustrations, and two nonisomorphic weak minimum aberration 2^(13-6) resolution IV designs are constructed at the end of the paper.
OPTIMIZED FUEL INJECTOR DESIGN FOR MAXIMUM IN-FURNACE NOx REDUCTION AND MINIMUM UNBURNED CARBON
SAROFIM, A F; LISAUSKAS, R; RILEY, D; EDDINGS, E G; BROUWER, J; KLEWICKI, J P; DAVIS, K A; BOCKELIE, M J; HEAP, M P; PERSHING, D
1998-01-01
Reaction Engineering International (REI) has established a project team of experts to develop a technology for combustion systems which will minimize NO_x emissions and minimize carbon in the fly ash. This much-needed technology will allow users to meet environmental compliance and produce a saleable by-product. This study is concerned with the NO_x control technology of choice for pulverized-coal-fired boilers, "in-furnace NO_x control," which includes: staged low-NO_x burners, reburning, selective non-catalytic reduction (SNCR), and hybrid approaches (e.g., reburning with SNCR). The program has two primary objectives: 1) to improve the performance of "in-furnace" NO_x control processes; and 2) to devise new, or improve existing, approaches for maximum "in-furnace" NO_x control and minimum unburned carbon. The program involves: 1) fundamental studies at laboratory- and bench-scale to define NO reduction mechanisms in flames and reburning jets; 2) laboratory experiments and computer modeling to improve two-phase mixing predictive capability; 3) evaluation of commercial low-NO_x burner fuel injectors to develop improved designs; and 4) demonstration of coal injectors for reburning and low-NO_x burners at commercial scale. The specific objectives of the two-phase program are to: 1) conduct research to better understand the interaction of heterogeneous chemistry and two-phase mixing on NO reduction processes in pulverized coal combustion; 2) improve the ability to predict combusting coal jets by verifying two-phase mixing models under conditions that simulate the near field of low-NO_x burners; 3) determine the limits on NO control by in-furnace NO_x control technologies as a function of furnace design and coal type; 5) develop and demonstrate improved coal injector designs for commercial low-NO_x burners and coal reburning systems; 6) modify the char burnout model in REI's coal
The ancient Egyptian civilization: maximum and minimum in coincidence with solar activity
Shaltout, M.
It is proved from the last 22 years of observations of the total solar irradiance (TSI) from space by artificial satellites that TSI shows negative correlation with solar activity (sunspots, flares, and 10.7 cm radio emissions) from day to day, but shows positive correlation with the same activity from year to year (on the basis of the annual average of each of them). Also, the solar constant estimated from ground-station observations of beam solar radiation during the 20th century indicates coincidence with the phases of the 11-year cycles. It is known from sunspot observations (250 years), and from C14 analysis, that there are other long-term cycles of solar activity longer than the 11-year cycle. The variability of the total solar irradiance affects the climate and the Nile flooding: there are periodicities in the Nile flooding similar to those of solar activity, from the analysis of about 1300 years of Nile level observations at Cairo. The secular variations of the Nile levels, regularly measured from the 7th to the 15th century A.D., clearly correlate with the solar variations, which suggests evidence for solar influence on climatic changes in the East African tropics. The civilization of ancient Egypt was highly correlated with the Nile flooding, since the river Nile was, and still is, the source of life in the Valley and Delta inside a highly dry desert area. The study depends on long-time historical data for Carbon 14 (more than five thousand years) and a chronological scanning of all the elements of the ancient Egyptian civilization, from the first dynasty to the twenty-sixth dynasty. The result shows coincidence between the ancient Egyptian civilization and solar activity. For example, the period of pyramid building, which is one of the brilliant periods, corresponds to maximum solar activity, whereas the periods of occupation of Egypt by foreign peoples correspond to minimum solar activity. The decline
J C Joshi; Tankeshwar Kumar; Sunita Srivastava; Divya Sachdeva
2017-02-01
Maximum and minimum temperatures are used in avalanche forecasting models for snow avalanche hazard mitigation over Himalaya. The present work is a part of development of Hidden Markov Model (HMM) based avalanche forecasting system for Pir-Panjal and Great Himalayan mountain ranges of the Himalaya. In this work, HMMs have been developed for forecasting of maximum and minimum temperatures for Kanzalwan in Pir-Panjal range and Drass in Great Himalayan range with a lead time of two days. The HMMs have been developed using meteorological variables collected from these stations during the past 20 winters from 1992 to 2012. The meteorological variables have been used to define observations and states of the models and to compute model parameters (initial state, state transition and observation probabilities). The model parameters have been used in the Forward and the Viterbi algorithms to generate temperature forecasts. To improve the model forecasts, the model parameters have been optimised using Baum–Welch algorithm. The models have been compared with persistence forecast by root mean square errors (RMSE) analysis using independent data of two winters (2012–13, 2013–14). The HMM for maximum temperature has shown a 4–12% and 17–19% improvement in the forecast over persistence forecast, for day-1 and day-2, respectively. For minimum temperature, it has shown 6–38% and 5–12% improvement for day-1 and day-2, respectively.
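The Viterbi decoding step mentioned above can be sketched for a toy discrete HMM. The two states, three observation symbols, and all probabilities below are invented for illustration; they are not the model parameters estimated from the 20 winters of station data:

```python
import numpy as np

# Hypothetical two-state HMM ("warm", "cold") with made-up parameters.
states = ["warm", "cold"]
pi = np.array([0.6, 0.4])          # initial state probabilities
A = np.array([[0.7, 0.3],          # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],     # emission probabilities over three
              [0.1, 0.3, 0.6]])    # discretised temperature observations

def viterbi(obs):
    """Most likely hidden-state sequence for a discrete observation sequence."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))           # best path score ending in each state
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        trans = delta[t - 1, :, None] * A   # score of moving i -> j
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):          # backtrack through pointers
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

print([states[s] for s in viterbi([0, 1, 2, 2])])
```

In the paper's setting the forecast lead time of two days corresponds to decoding (and then predicting) over short observation windows, with the parameters pi, A, and B first refined by the Baum-Welch algorithm.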
Applying Tabu Heuristic to Wind Influenced, Minimum Risk and Maximum Expected Coverage Routes
1997-02-01
The vehicle routing problem with maximum coverage is an extension of the classical general vehicle routing problem (GVRP). A literature review for the general vehicle routing problem includes Eilon et al. (1971) and Bodin (1983). Within a traditional deterministic
The Round-Robin Mock Interview: Maximum Learning in Minimum Time
Marks, Melanie; O'Connor, Abigail H.
2006-01-01
Interview skills are critical to a job seeker's success in obtaining employment. However, learning interview skills takes time. This article offers an activity that provides students with interview practice while sacrificing only a single classroom period. The authors begin by reviewing relevant literature. Then, they outline the process of…
Kumar, Sanjay
2016-06-01
The present paper examines the prediction capability of the latest version of the International Reference Ionosphere (IRI-2012) model in predicting the total electron content (TEC) over seven different equatorial regions across the globe during a very low solar activity phase (2009) and a high solar activity phase (2012). This has been carried out by comparing the ground-based Global Positioning System (GPS)-derived VTEC with that from the IRI-2012 model. The observed GPS-TEC shows the presence of a winter anomaly which is prominent during the solar maximum year 2012 and disappears during the solar minimum year 2009. The monthly and seasonal means of the IRI-2012 model TEC with the IRI-NeQ topside have been compared with the GPS-TEC, and our results show that the monthly and seasonal mean values of the IRI-2012 model overestimate the observed GPS-TEC at all the equatorial stations. The discrepancy (or overestimation) in the IRI-2012 model is found to be larger during the solar maximum year 2012 than during the solar minimum year 2009. This contradicts the results recently presented by Tariku (2015) over the equatorial regions of Uganda. The discrepancy is largest during the December solstice and smallest during the March equinox. The magnitude of the discrepancy in the IRI-2012 model shows a longitudinal dependence, maximized in the western longitude sector during both 2009 and 2012. The significant discrepancy in the IRI-2012 model observed during the solar minimum year 2009 could be attributed to the larger difference between the F10.7 flux and the EUV flux (26-34 nm) during the low solar activity period 2007-2009 than during the high solar activity period 2010-2012. This suggests that, to represent the solar activity impact in the IRI model, implementation of new solar activity indices is required for better performance.
Yatracos, Yannis G.
2013-01-01
The inherent bias pathology of the maximum likelihood (ML) estimation method is confirmed for models with unknown parameters $\theta$ and $\psi$ when the MLE $\hat\psi$ is a function of the MLE $\hat\theta$. To reduce $\hat\psi$'s bias, the likelihood equation to be solved for $\psi$ is updated using the model for the data $Y$ in it. The model-updated (MU) MLE, $\hat\psi_{MU}$, often reduces either totally or partially $\hat\psi$'s bias when estimating a shape parameter $\psi$. For the Pareto model $\hat...
Germec-Cakan, Derya; Taner, Tulin; Akan, Seden
2011-10-01
The aim of this study was to investigate upper respiratory airway dimensions in non-extraction and extraction subjects treated with minimum or maximum anchorage. Lateral cephalograms of 39 Class I subjects were divided into three groups (each containing 11 females and 2 males) according to treatment procedure: group 1, 13 patients treated with extraction of four premolars and minimum anchorage; group 2, 13 cases treated non-extraction with air-rotor stripping (ARS); and group 3, 13 bimaxillary protrusion subjects treated with extraction of four premolars and maximum anchorage. The mean ages of the patients were 18.1 ± 3.7, 17.8 ± 2.4, and 15.5 ± 0.88 years, respectively. Tongue, soft palate, hyoid position, and upper airway measurements were made on pre- and post-treatment lateral cephalograms, and the differences between the mean measurements were tested using the Wilcoxon signed-ranks test. Superior and middle airway space increased significantly, whereas extraction treatment using maximum anchorage had a reducing effect on the middle and inferior airway dimensions.
Liu Zhong-Bao
2016-06-01
Support Vector Machine (SVM) is one of the important stellar spectral classification methods, and it is widely used in practice. However, its classification efficiency cannot be greatly improved because it does not take the class distribution into consideration. In view of this, a modified SVM, named Minimum within-class and Maximum between-class scatter Support Vector Machine (MMSVM), is constructed to deal with this problem. MMSVM merges the advantages of Fisher's Discriminant Analysis (FDA) and SVM, and comparative experiments on the Sloan Digital Sky Survey (SDSS) show that MMSVM performs better than SVM.
Zhang, Yafei; Zhang, Fangqing; Chen, Guanghua
1994-12-01
It is proposed in this paper that the minimum substrate temperature for diamond growth from hydrogen-hydrocarbon gas mixtures be determined by the packing arrangements of hydrocarbon fragments at the surface, and the maximum substrate temperature be limited by the diamond growth surface reconstruction, which can be prevented by saturating the surface dangling bonds with atomic hydrogen. Theoretical calculations have been done by a formula proposed by Dryburgh [J. Crystal Growth 130 (1993) 305], and the results show that diamond can be deposited at the substrate temperatures ranging from ≈ 400 to ≈ 1200°C by low pressure chemical vapor deposition. This is consistent with experimental observations.
Aeronomical constraints to the minimum mass and maximum radius of hot low-mass planets
Fossati, L.; Erkaev, N. V.; Lammer, H.; Cubillos, P. E.; Odert, P.; Juvan, I.; Kislyakova, K. G.; Lendl, M.; Kubyshkina, D.; Bauer, S. J.
2017-02-01
Stimulated by the discovery of a number of close-in low-density planets, we generalise the Jeans escape parameter taking hydrodynamic and Roche lobe effects into account. We furthermore define Λ as the value of the Jeans escape parameter calculated at the observed planetary radius and mass for the planet's equilibrium temperature and considering atomic hydrogen, independently of the atmospheric temperature profile. We consider 5 and 10 M⊕ planets with an equilibrium temperature of 500 and 1000 K, orbiting early G-, K-, and M-type stars. Assuming a clear atmosphere and by comparing escape rates obtained from the energy-limited formula, which only accounts for the heating induced by the absorption of the high-energy stellar radiation, and from a hydrodynamic atmosphere code, which also accounts for the bolometric heating, we find that planets whose Λ is smaller than 15-35 lie in the "boil-off" regime, where the escape is driven by the atmospheric thermal energy and low planetary gravity. We find that the atmosphere of hot (i.e. Teq ⪆ 1000 K) low-mass (Mpl ⪅ 5 M⊕) planets with Λmass (Mpl ⪅ 10 M⊕) planets with Λmass and maximum radius and can be used to predict the presence of aerosols and/or constrain planetary masses, for example.
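The generalised Jeans escape parameter named above, evaluated at the observed planetary mass and radius for atomic hydrogen at the equilibrium temperature, can be sketched as Λ = G·M_pl·m_H / (k_B·T_eq·R_pl); the example planet below is hypothetical and chosen only to show the boil-off threshold quoted in the abstract:

```python
# Restricted Jeans escape parameter, a sketch under the assumption that
# Lambda = G * M_pl * m_H / (k_B * T_eq * R_pl), with SI constants.
G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
K_B = 1.381e-23  # Boltzmann constant [J K^-1]
M_H = 1.674e-27  # mass of atomic hydrogen [kg]

def jeans_lambda(m_planet, r_planet, t_eq):
    """Jeans escape parameter at the observed radius for atomic hydrogen."""
    return G * m_planet * M_H / (K_B * t_eq * r_planet)

# Hypothetical hot (T_eq = 1000 K) planet with Earth's mass and radius:
lam = jeans_lambda(5.97e24, 6.371e6, 1000.0)
print(lam)  # ~7.6, below the 15-35 range, i.e. in the 'boil-off' regime
```

Under this definition, lowering the mass or raising the equilibrium temperature drives Λ down toward the boil-off regime, which is the qualitative behaviour the abstract describes for hot low-mass planets.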
Santos W. N. dos
2003-01-01
The hot wire technique is considered to be an effective and accurate means of determining the thermal conductivity of ceramic materials. However, specifically for materials of high thermal diffusivity, the appropriate time interval to be considered in calculations is a decisive factor for obtaining accurate and consistent results. In this work, a numerical simulation model is proposed with the aim of determining the minimum and maximum measuring times for the hot wire parallel technique. The temperature profile generated by this model is in excellent agreement with the one experimentally obtained by the technique, in which thermal conductivity, thermal diffusivity and specific heat are simultaneously determined from the same experimental temperature transient. Eighteen different specimens of refractory materials and polymers, with thermal diffusivities ranging from 1×10⁻⁷ to 70×10⁻⁷ m²/s, in the shape of rectangular parallelepipeds of different dimensions, were employed in the experimental programme. An empirical equation relating the minimum and maximum measuring times to the thermal diffusivity of the sample is also obtained.
The maximum power efficiency 1-√τ: Research, education, and bibliometric relevance
Calvo Hernández, A.; Roco, J. M. M.; Medina, A.; Velasco, S.; Guzmán-Vargas, L.
2015-07-01
The well-known efficiency at maximum power for a cyclic system working between a hot (T_h) and a cold (T_c) temperature, given by 1 − √τ (τ = T_c/T_h), has become a landmark result with regard to the thermodynamic optimization of a great variety of energy converters. Its wide applicability and sole dependence on the external heat bath temperatures (as with the Carnot efficiency) allow for easy comparison with experimental efficiencies, leading to strikingly fair agreement. Reversible, finite-time, and linear-irreversible derivations are analyzed in order to give a broader perspective on its meaning from both research and pedagogical points of view. Its scientific relevance and historical development are also analyzed in this work by means of some bibliometric data. This article is supplemented with comments by Hong Qian and a final reply by the authors.
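The 1 − √τ result (often called the Curzon-Ahlborn efficiency) is easy to compare against the Carnot bound numerically; the reservoir temperatures below are illustrative:

```python
from math import sqrt

# Efficiency at maximum power (the 1 - sqrt(tau) form discussed above)
# versus the Carnot bound, for reservoir temperatures T_h and T_c.
def eta_max_power(t_c, t_h):
    return 1.0 - sqrt(t_c / t_h)

def eta_carnot(t_c, t_h):
    return 1.0 - t_c / t_h

t_c, t_h = 300.0, 500.0  # illustrative reservoir temperatures [K]
print(round(eta_max_power(t_c, t_h), 3))  # 0.225
print(round(eta_carnot(t_c, t_h), 3))     # 0.4
```

As the comparison shows, the maximum-power efficiency always lies below the Carnot efficiency for the same reservoirs, which is why measured efficiencies of real power plants tend to fall near the 1 − √τ curve rather than the Carnot limit.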
G. O. Walker
Median hourly electron content-latitude profiles obtained in South East Asia under solar minimum and maximum conditions have been used to establish seasonal and solar differences in the diurnal variations of the ionospheric equatorial anomaly (EIA). The seasonal changes have been mainly accounted for by considering the daytime meridional wind, which affects the EIA through diffusion of ionization from the magnetic equator down the magnetic field lines towards the crests. Depending upon the seasonal location of the subsolar point in relation to the magnetic equator, diffusion rates were increased or decreased. This led to crest asymmetries at the solstices, with (1) the winter crest enhanced in the morning (increased diffusion rate) and (2) the same crest decaying most rapidly in the late afternoon (faster recombination rate at lower ionospheric levels). Such asymmetries were also observed, to a lesser extent, at the equinoxes, since the magnetic equator (located at about 9°N latitude) does not coincide with the geographic equator. Another factor affecting the magnitude of a particular electron content crest was the proximity of the subsolar point, since this increased the local ionization production rate. Enhancements of the EIA took place around sunset, mainly during the equinoxes and more frequently at solar maximum, and there was also evidence of apparent EIA crest resurgences around 0300 LST for all seasons at solar maximum. The latter are thought to be associated with the commonly observed post-midnight ionization enhancements at midlatitudes, ionization being transported to low latitudes by an equatorward wind. The ratio increases in crest peak electron contents from solar minimum to maximum of 2.7 at the equinoxes, 2.0 at the northern summer solstice and 1.7 at the northern winter solstice can be explained, only partly, by increases in the magnitude of the eastward electric field E overhead the magnetic equator.
Minimum Grading, Maximum Learning
Carey, Theodore; Carifio, James
2011-01-01
Fair and effective schools should assign grades that align with clear and consistent evidence of student performance (Wormeli, 2006), but when a student's performance is inconsistent, traditional grading practices can prove inadequate. Understanding this, increasing numbers of schools have been experimenting with the practice of assigning minimum…
Maximum outreach. . . minimum budget
Laychak, Mary Beth
2011-06-01
Many astronomical institutions have budgetary constraints that prevent them from spending large amounts on public outreach. This is especially true for smaller organizations, such as the Canada-France-Hawaii Telescope (CFHT), where manpower and funding are at a premium. To maximize our impact, we employ unconventional and affordable outreach techniques that underscore our commitment to astronomy education and our local community. We participate in many unique community interactions, ranging from rodeo calf-dressing tournaments to art gallery exhibitions of CFHT images. Further, we have developed many creative methods to communicate complex astronomical concepts to both children and adults, including the use of a modified webcam to teach infrared astronomy and the production of an online newsletter for parents, children, and educators. This presentation will discuss the outreach methods CFHT has found most effective in our local schools and our rural community.
S. Vignesh
2017-04-01
Flow-based erosion-corrosion problems are very common in fluid-handling equipment such as propellers, impellers, and pumps in warships and submarines. Though there are many coating materials available to combat erosion-corrosion damage in the above components, iron-based amorphous coatings are considered to be more effective for combating erosion-corrosion problems. The high velocity oxy-fuel (HVOF) spray process is considered to be a better process for depositing iron-based amorphous powders. In this investigation, an iron-based amorphous metallic coating was developed on a 316 stainless steel substrate using the HVOF spray technique. Empirical relationships were developed to predict the porosity and microhardness of the iron-based amorphous coating, incorporating HVOF spray parameters such as oxygen flow rate, fuel flow rate, powder feed rate, carrier gas flow rate, and spray distance. Response surface methodology (RSM) was used to identify the optimal HVOF spray parameters to attain a coating with minimum porosity and maximum hardness.
THE 2003-2007 MINIMUM, MAXIMUM AND MEDIUM DISCHARGE ANALYSIS OF THE LATORIŢA-LOTRU WATER SYSTEM
Simona-Elena MIHĂESCU
2010-06-01
From a functional point of view, the Lotru and Latoriţa make up a water system through the junction of two water flows with high hydro-energetic potential. The Lotru springs from the Parâng Massif at a spring elevation of over 1900 m and an outfall elevation of 298 m, which makes for an altitude difference of 1602 m; it is an affluent of the Olt River, has a course length of 76 km and a minimum discharge of 20 m³/s. Its catchment area is 1024 km². The Latoriţa springs from the Latoriţa Mountains; it is a small river with an average discharge of 2.7 m³/s and is an affluent of the Lotru. Together, the two make up a system of high hydro-energetic potential, valorized in the system of lakes which serve the Ciunget Hydro-Electric Power Plant. Galbenu and Petrimanu are two reservoirs built on the Latoriţa River, and on the Lotru there are the Vidra, Balindru, Mălaia and Brădişor reservoirs. The discharge analysis of these rivers is very important for good risk management, especially concerning floods and high waters, even in the case of artificial water flows such as the Latoriţa-Lotru water system.
On the relevance of the maximum entropy principle in non-equilibrium statistical mechanics
Auletta, Gennaro; Rondoni, Lamberto; Vulpiani, Angelo
2017-07-01
At first glance, the maximum entropy principle (MEP) apparently allows us to derive, or justify in a simple way, fundamental results of equilibrium statistical mechanics. Because of this, a school of thought considers the MEP a powerful and elegant way to make predictions in physics and other disciplines, rather than a useful technical tool like others in statistical physics. From this point of view, the MEP appears as an alternative and more general predictive method than the traditional ones of statistical physics. Actually, careful inspection shows that such success is due to a series of fortunate facts that characterize the physics of equilibrium systems, but which are absent in situations not described by Hamiltonian dynamics, or generically in non-equilibrium phenomena. Here we discuss several important examples in non-equilibrium statistical mechanics in which the MEP leads to incorrect predictions, proving that it does not have a predictive nature. We conclude that, in these paradigmatic examples, an approach that uses a detailed analysis of the relevant aspects of the dynamics cannot be avoided.
Muin F. Ubeid
2012-12-01
The optical transmission properties of a structure consisting of N identical pairs of left- and right-handed materials are investigated theoretically and numerically. Maxwell's equations are used to determine the electric and magnetic fields of the incident waves at each layer. Snell's law is applied and the boundary conditions are imposed at each layer interface to calculate the Fresnel coefficients. Expressions for the reflectance and transmittance of the structure are given in terms of these coefficients. In the numerical results, the transmittance of the structure is computed and illustrated as a function of frequency for different values of N. Minimum transmittance is achieved by using high and low opposite refractive indices for the left- and right-handed materials of each pair of the structure. The frequency band of this transmittance is reduced by decreasing N. Maximum transmittance is demonstrated by using two slabs of the same width and opposite refractive indices placed between two dielectric media of the same kind. The effect of frequency and angle of incidence is very weak in these structures compared to their all-dielectric counterparts. Moreover, the obtained results are in agreement with the law of conservation of energy.
Jaagus, Jaak; Briede, Agrita; Rimkus, Egidijus; Remm, Kalle
2014-10-01
Spatial distribution and trends in mean and absolute maximum and minimum temperatures and in the diurnal temperature range were analysed at 47 stations in the eastern Baltic region (Lithuania, Latvia and Estonia) during 1951-2010. The dependence of the studied variables on geographical factors (latitude, the Baltic Sea, land elevation) is discussed. Statistically significant increasing trends in maximum and minimum temperatures were detected for March, April, July, August and annual values. At the majority of stations, an increase was also detected in February and May for maximum temperature and in January and May for minimum temperature. Warming was slightly stronger in the northern part of the study area, i.e. in Estonia. Trends in the diurnal temperature range differ seasonally. The strongest increasing trend was revealed in April and, at some stations, also in May, July and August. Negative and mostly insignificant changes have occurred in January, February, March and June. The annual temperature range has not changed.
Wilson, Robert M.
2013-01-01
Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadal- and SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.
Coplen, T.B.; Hopple, J.A.; Böhlke, J.K.; Peiser, H.S.; Rieder, S.E.; Krouse, H.R.; Rosman, K.J.R.; Ding, T.; Vocke, R.D.; Revesz, K.M.; Lamberty, A.; Taylor, P.; De Bievre, P.
2002-01-01
laboratories comparable. The minimum and maximum concentrations of a selected isotope in naturally occurring terrestrial materials for selected chemical elements reviewed in this report are given below:

Isotope   Minimum mole fraction   Maximum mole fraction
2H        0.0000255               0.0001838
7Li       0.9227                  0.9278
11B       0.7961                  0.8107
13C       0.009629                0.011466
15N       0.003462                0.004210
18O       0.001875                0.002218
26Mg      0.1099                  0.1103
30Si      0.030816                0.031023
34S       0.0398                  0.0473
37Cl      0.24077                 0.24356
44Ca      0.02082                 0.02092
53Cr      0.09501                 0.09553
56Fe      0.91742                 0.91760
65Cu      0.3066                  0.3102
205Tl     0.70472                 0.70506

The numerical values above have uncertainties that depend upon the uncertainties of the determinations of the absolute isotope-abundance variations of reference materials of the elements. Because reference materials used for absolute isotope-abundance measurements have not been included in relative isotope abundance investigations of zinc, selenium, molybdenum, palladium, and tellurium, ranges in isotopic composition are not listed for these elements, although such ranges may be measurable with state-of-the-art mass spectrometry. This report is available at the URL: http://pubs.water.usgs.gov/wri014222.
Er, Hale Çolakoğlu; Erden, Ayşe; Küçük, N Özlem; Geçim, Ethem
2014-01-01
The aim of this study was to retrospectively assess the correlation between minimum apparent diffusion coefficient (ADCmin) values obtained from diffusion-weighted magnetic resonance imaging (MRI) and maximum standardized uptake values (SUVmax) obtained from positron emission tomography-computed tomography (PET-CT) in rectal cancer. Forty-one patients with pathologically confirmed rectal adenocarcinoma were included in this study. For preoperative staging, PET-CT and pelvic MRI with diffusion-weighted imaging were performed within one week (mean time interval, 3±1 days). For ADC measurements, the region of interest (ROI) was manually drawn along the border of each hyperintense tumor on b=1000 s/mm² images. After repeating this procedure on each consecutive tumor-containing slice to cover the entire tumoral area, ROIs were copied to ADC maps. ADCmin was determined as the lowest ADC value among all ROIs in each tumor. For SUVmax measurements, whole-body images were assessed visually on transaxial, sagittal, and coronal images. ROIs were determined from the lesions observed on each slice, and SUVmax values were calculated automatically. The ADCmin and SUVmax values were correlated using Spearman's test. The mean ADCmin was 0.62±0.19×10⁻³ mm²/s (range, 0.368-1.227×10⁻³ mm²/s), and the mean SUVmax was 20.07±9.3 (range, 4.3-49.5). A significant negative correlation was found between ADCmin and SUVmax (r=-0.347; P = 0.026). There was a significant negative correlation between the ADCmin and SUVmax values in rectal adenocarcinomas.
Oberer, R.B.
2000-12-07
In an instrumented Cf-252 neutron source, it is desirable to distinguish fission events which produce neutrons from alpha decay events. A comparison of the maximum amplitude of a pulse from an alpha decay with the minimum amplitude of a fission pulse shows that the hemispherical configuration of the ion chamber is superior to the parallel-plate ion chamber.
2010-04-01
§ 1000.124 What maximum and minimum rent or homebuyer payment can a recipient charge a low-income rental tenant or homebuyer residing in housing units...? A recipient may charge a low-income rental tenant or homebuyer rent or homebuyer payments not to exceed 30 percent of...
Sa-Correia, I.; Van Uden, N.
1983-06-01
Difficulties experienced by brewers with yeast performance in the brewing of lager at low temperatures have led the authors to study the effect of ethanol on the minimum temperature for growth (Tmin). It has been found that both the maximum temperature (Tmax) and Tmin were adversely affected by ethanol and that ethanol tolerance prevailed at intermediate temperatures. (Refs. 8).
Soldier-relevant body borne load impacts minimum foot clearance during obstacle negotiation.
Brown, T N; Loverro, K L; Schiffman, J M
2016-07-01
Soldiers often trip and fall on duty, resulting in injury. This study examined ten male soldiers' ability to negotiate an obstacle. Participants had lead and trail foot minimum foot clearance (MFC) parameters quantified while crossing a low (305 mm) and high (457 mm) obstacle with (19.4 kg) and without (6 kg) body borne load. To minimize tripping risk, participants increased lead foot MFC (p = 0.028) and reduced lead (p = 0.044) and trail (p = 0.035) foot variability when negotiating an obstacle with body borne load. While obstacle height had no effect on MFC (p = 0.273 and p = 0.126), placing the trail foot closer to the high obstacle when crossing with body borne load resulted in greater lead (R = 0.640, b = 0.241, p = 0.046) and trail (R = 0.636, b = 0.287, p = 0.048) MFC. Soldiers, when carrying typical military loads, may be able to minimize their risk of tripping over an obstacle by creating a safety margin via greater foot clearance with reduced variability.
Kandaswamy, Krishna Kumar; Pugalenthi, Ganesan; Kalies, Kai-Uwe; Hartmann, Enno; Martinetz, Thomas
2013-01-21
The extracellular matrix (ECM) is a major component of tissues of multicellular organisms. It consists of secreted macromolecules, mainly polysaccharides and glycoproteins. Malfunctions of ECM proteins lead to severe disorders such as Marfan syndrome, osteogenesis imperfecta, numerous chondrodysplasias, and skin diseases. In this work, we report a random forest approach, EcmPred, for the prediction of ECM proteins from protein sequences. EcmPred was trained on a dataset containing 300 ECM and 300 non-ECM proteins and tested on a dataset containing 145 ECM and 4187 non-ECM proteins. EcmPred achieved 83% accuracy on the training and 77% on the test dataset. EcmPred predicted 15 out of 20 experimentally verified ECM proteins. By scanning the entire human proteome, we predicted novel ECM proteins validated with gene ontology and InterPro. The dataset and standalone version of the EcmPred software is available at http://www.inb.uni-luebeck.de/tools-demos/Extracellular_matrix_proteins/EcmPred.
Mesfin Dema
2014-05-01
We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating objects' relevancy, hierarchical and contextual constraints in a unified model. This model is formulated as a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in objects' existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations), and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes that are composed of objects whose existence, styles and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated.
Ngeow, Chow-Choong; Kanbur, Shashi M.; Bhardwaj, Anupam; Schrecengost, Zachariah; Singh, Harinder P.
2017-01-01
Investigation of period–color (PC) and amplitude–color (AC) relations at the maximum and minimum light can be used to probe the interaction of the hydrogen ionization front (HIF) with the photosphere and the radiation hydrodynamics of the outer envelopes of Cepheids and RR Lyraes. For example, theoretical calculations indicated that such interactions would occur at minimum light for RR Lyrae and result in a flatter PC relation. In the past, the PC and AC relations have been investigated by using either the (V ‑ R)MACHO or (V ‑ I) colors. In this work, we extend previous work to other bands by analyzing the RR Lyraes in the Sloan Digital Sky Survey Stripe 82 Region. Multi-epoch data are available for RR Lyraes located within the footprint of the Stripe 82 Region in five (ugriz) bands. We present the PC and AC relations at maximum and minimum light in four colors: (u ‑ g)0, (g ‑ r)0, (r ‑ i)0, and (i ‑ z)0, after they are corrected for extinction. We found that the PC and AC relations for this sample of RR Lyraes show a complex nature in the form of flat, linear or quadratic relations. Furthermore, the PC relations at minimum light for fundamental mode RR Lyrae stars are separated according to the Oosterhoff type, especially in the (g ‑ r)0 and (r ‑ i)0 colors. If only considering the results from linear regressions, our results are quantitatively consistent with the theory of HIF-photosphere interaction for both fundamental and first overtone RR Lyraes.
V R Durai; Rashmi Bhardwaj
2014-07-01
The output from Global Forecasting System (GFS) T574L64 operational at India Meteorological Department (IMD), New Delhi is used for obtaining location specific quantitative forecast of maximum and minimum temperatures over India in the medium range time scale. In this study, a statistical bias correction algorithm has been introduced to reduce the systematic bias in the 24–120 hour GFS model location specific forecast of maximum and minimum temperatures for 98 selected synoptic stations, representing different geographical regions of India. The statistical bias correction algorithm used for minimizing the bias of the next forecast is Decaying Weighted Mean (DWM), as it is suitable for small samples. The main objective of this study is to evaluate the skill of Direct Model Output (DMO) and Bias Corrected (BC) GFS for location specific forecast of maximum and minimum temperatures over India. The performance skill of 24–120 hour DMO and BC forecast of GFS model is evaluated for all the 98 synoptic stations during summer (May–August 2012) and winter (November 2012–February 2013) seasons using different statistical evaluation skill measures. The magnitude of Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for BC GFS forecast is lower than DMO during both summer and winter seasons. The BC GFS forecasts have higher skill score as compared to GFS DMO over most of the stations in all day-1 to day-5 forecasts during both summer and winter seasons. It is concluded from the study that the skill of GFS statistical BC forecast improves over the GFS DMO remarkably and hence can be used as an operational weather forecasting system for location specific forecast over India.
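The abstract does not state the DWM update rule; a common way to realize a decaying weighted mean of past forecast errors is an exponentially weighted running bias, sketched below. The weight value and the update form are assumptions for illustration, not IMD's exact algorithm:

```python
import numpy as np

def decaying_weighted_mean_bias(forecasts, observations, weight=0.3):
    """Running bias estimate: each new error (forecast - observation)
    enters with weight `weight`, so older errors decay geometrically.
    A sketch of a decaying weighted mean; the operational DWM may differ."""
    bias = 0.0
    history = []
    for f, o in zip(forecasts, observations):
        bias = weight * (f - o) + (1.0 - weight) * bias
        history.append(bias)
    return np.array(history)

# Toy maximum-temperature forecasts vs. observations (invented values)
fcst = np.array([32.0, 33.0, 31.5, 32.5])
obs = np.array([30.0, 31.0, 30.0, 30.5])
bias = decaying_weighted_mean_bias(fcst, obs)
corrected_next = 33.0 - bias[-1]  # subtract the latest bias estimate
```

Because the weights decay geometrically, the estimate adapts quickly from a small sample, which matches the abstract's stated motivation for choosing DWM.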
Saveljev, Vladimir; Kim, Sung-Kyu; Lee, Hyoung; Kim, Hyun-Woo; Lee, Byoungho
2016-02-08
The amplitude of the moiré patterns is estimated in relation to the opening ratio in line gratings and square grids. The theory is developed; the experimental measurements are performed. The minimum and the maximum of the amplitude are found. There is a good agreement between the theoretical and experimental data. This is additionally confirmed by the visual observation. The results can be applied to the image quality improvement in autostereoscopic 3D displays, to the measurements, and to the moiré displays.
Casati, Michele
2014-05-01
The global communication network and GPS satellites have enabled us to monitor, for more than a decade, some of the more sensitive, well-known and highly urbanized volcanic areas around the world. The possibility of electromagnetic coupling between Earth-Sun dynamics and major geophysical events is a topic of research. However, the majority of researchers are orienting their work in one direction: attempting to demonstrate a significant EM coupling between solar dynamics and terrestrial seismicity while ignoring a possible relationship between solar dynamics and the dynamics inherent in volcanic calderas. The scientific references are scarce; however, a study conducted by the Vesuvius Observatory of Naples notes that the seismic activity on the volcano is closely related to changes in solar activity and the Earth's magnetic field. We decided to extend the study to many other volcanic calderas in the world in order to generalise the relationship between solar activity and caldera activity and/or deformation of the ground. The Northern Hemisphere volcanoes examined are: Long Valley, Yellowstone, Three Sisters, Kilauea Hawaii, Axial Seamount (United States); Augustine (Alaska); Sakurajima (Japan); Hammarinn, Krisuvik, Askja (Iceland); and Campi Flegrei (Italy). We note that the deformation of volcanoes recorded in GPS logs varies in long, slow geodynamic processes related to the two well-known periods within the eleven-year cycle of solar magnetic activity: the solar minimum and maximum. We find that the years of minimum (maximum) coincide with the years in which a transition to a phase of deflation (inflation) occurs. Additionally, the seismicity recorded in such areas reaches its peak in the years of solar minimum or maximum. However, the total number and magnitude of seismic events is greater during deep solar minima than maxima, evidenced by increased seismic activity occurring between 2006 and 2010. This
MAXIMUM DISCLOSURE WITH MINIMUM DELAY
J Van R. du Preez
2012-02-01
In his treatment of the subject 'Die SA Weermag moet ook sy ander wapens effektief aanwend' ('The SA Defence Force must also employ its other weapons effectively') in the 7/1 issue of Militaria, Colonel W. Otto regards it as incumbent on the South African Defence Force to make effective use of propaganda (in my book, the corruption of the channels of communication).
Albuquerque, Fabio; Beier, Paul
2015-01-01
Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest sites (the minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (the maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from … algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
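RWR itself is easy to state: each species is weighted by the reciprocal of the number of sites it occupies, and a site's score is the sum of the weights of the species it holds. A minimal sketch (the site and species names are invented):

```python
from collections import defaultdict

def rarity_weighted_richness(site_species):
    """site_species: dict mapping site -> set of species present.
    Each species contributes 1/(number of sites it occupies) to
    every site that holds it."""
    occupancy = defaultdict(int)
    for species_set in site_species.values():
        for sp in species_set:
            occupancy[sp] += 1
    return {site: sum(1.0 / occupancy[sp] for sp in sps)
            for site, sps in site_species.items()}

sites = {
    "A": {"fox", "owl", "newt"},  # newt occurs only here -> weight 1.0
    "B": {"fox", "owl"},
    "C": {"fox"},
}
rwr = rarity_weighted_richness(sites)
ranked = sorted(rwr, key=rwr.get, reverse=True)  # prioritize by RWR
```

Site A scores highest because it holds the range-restricted newt, which is the behavior that lets a simple greedy pass over `ranked` approximate minimum-set and maximum-coverage solutions.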
Al-Quwaiee, Hessa
2016-01-07
In this work, we derive the exact statistical characteristics of the maximum and the minimum of two modified double generalized gamma variates in closed-form in terms of Meijer's G-function, Fox's H-function, the extended generalized bivariate Meijer's G-function and H-function, in addition to simple closed-form asymptotic results in terms of elementary functions. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity and of (ii) a dual-hop free-space optical relay transmission system over double generalized gamma fading channels with the impact of pointing errors. In addition, we provide asymptotic results of the bit error rate of the two systems at the high SNR regime. Computer-based Monte-Carlo simulations verify our new analytical results.
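While the paper's closed forms involve Meijer G- and Fox H-functions, the underlying order-statistic identities for independent variates are elementary: P(max ≤ t) = F₁(t)F₂(t) and P(min ≤ t) = 1 − (1 − F₁(t))(1 − F₂(t)). A Monte-Carlo sketch of the kind of verification the paper describes, using ordinary gamma variates as stand-ins rather than the modified double generalized gamma distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Gamma variates as simple stand-ins for the fading gains analyzed
x1 = rng.gamma(2.0, 1.0, size=n)
x2 = rng.gamma(3.0, 0.5, size=n)

t = 1.5
# Empirical marginal CDFs at t
f1 = np.mean(x1 <= t)
f2 = np.mean(x2 <= t)
# Empirical CDFs of the max and min at t
emp_max = np.mean(np.maximum(x1, x2) <= t)
emp_min = np.mean(np.minimum(x1, x2) <= t)
# For independent variates: emp_max ~ f1*f2, emp_min ~ 1-(1-f1)*(1-f2)
```

The same Monte-Carlo check applies unchanged to any pair of independent fading distributions; only the sampling step changes.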
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
spherical variogram over the conterminous land of Spain, and converted to a regular 10 km² grid (resolution similar to the mean distance between stations) to map the results. Over the conterminous land of Spain, the distance at which pairs of stations share a common variance in temperature (both maximum, Tmax, and minimum, Tmin) above the selected threshold (50%, Pearson r ~0.70) on average does not exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along the coastland areas and lower variability inland. The highest spatial variability coincides particularly with coastland areas surrounded by mountain chains, suggesting that orography is one of the main factors driving higher interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than the diurnal one. The results suggest that, in general, local factors affect the spatial variability of Tmin more than that of Tmax, so a higher network density would be necessary to capture the higher spatial variability highlighted for minimum temperature with respect to maximum temperature. A conservative distance for reference series could be evaluated at 200 km, which we propose for the continental land of Spain and use in the development of MOTEDAS.
Kraehenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Juergen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)
2013-10-15
The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily Tmin/Tmax for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2 °C across arid areas, yet overestimated by around 2 °C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly across
Leonardo W. T. Silva
2014-08-01
In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic-reflector (PR) antennas with phased arrays (PAs). These arrays enable the electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals, producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator takes a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced premature convergence.
Sabitha Gauni
2014-03-01
In the field of wireless communication, there is always a demand for reliability, improved range and speed. Many wireless networks such as OFDM, CDMA2000 and WCDMA provide a solution to this problem when incorporated with multiple-input multiple-output (MIMO) technology. Due to the complexity of the signal processing, MIMO is highly expensive in terms of area consumption. In this paper, a method of MIMO receiver design is proposed to reduce the area consumed by the processing elements involved in complex signal processing, and a solution for area reduction in the MIMO maximum likelihood receiver (MLE) using sorted QR decomposition and a unitary transformation method is analyzed. It provides a unified approach, reduces ISI and gives better performance at low cost. The receiver pre-processor architecture based on minimum mean square error (MMSE) is compared while using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations are transformations of matrices which maintain the Hermitian nature of the matrix and the multiplication and addition relationships between the operators. This helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bound and the algorithm is well suited for fixed-point arithmetic.
Jeng, K-S; Huang, C-C; Lin, C-K; Lin, C-C; Chen, K-H
2013-06-01
Early detection of Budd-Chiari syndrome (BCS), so that appropriate therapy can be given in time, is crucial. Angiography remains the gold standard for diagnosing BCS. However, establishing the diagnosis of BCS in complicated cirrhotic patients remains a challenge. We used maximum intensity projection (Max IP) and minimum intensity projection (Min IP) reconstructions of computed tomographic (CT) images to detect this syndrome in such a patient. A 55-year-old man with a history of chronic hepatitis B infection and alcoholism had previously undergone a left lateral segmentectomy for hepatic epithelioid angiomyolipoma (4.6 × 3.5 × 3.3 cm) with concomitant splenectomy. Liver decompensation with intractable ascites and jaundice occurred 4 months later. The reformatted venous-phase enhanced CT images with Max IP and Min IP showed middle hepatic vein thrombosis. He then underwent living-related donor liver transplantation with a right liver graft from his daughter. Intraoperatively, we noted thrombosis of the middle hepatic vein protruding into the inferior vena cava. The postoperative course was uneventful. Microscopic findings revealed micronodular cirrhosis with mixed inflammation in the portal areas. Some liver lobules exhibited congestion and sinusoidal dilation, compatible with venous occlusion clinically. We recommend Max IP and Min IP of CT images as simple and effective techniques for establishing the diagnosis of BCS, especially in complicated cirrhotic patients, thereby avoiding invasive interventional procedures.
Network float model of network planning and its maximum and minimum
刘琳; 李俊; 吴轶群
2011-01-01
The network float of a network plan denotes the sum of the floats that can actually be consumed by the activities, which is not the simple sum of the theoretical floats; it is the total slack of a CPM network plan. Under a fixed total duration, the network float determines the sum of the maximum durations all activities can actually reach, and it is closely correlated with project cost. The network float is a variable quantity that depends on the time parameters of each activity, which means its value can be set by adjusting those time parameters, thereby enabling cost optimization; however, the range over which the float can vary had not previously been established. In this paper, we first analyze the meaning of network float from a new perspective. Second, on this basis, we build a model for computing the network float, determine its range of variation (i.e., its maximum and minimum), and determine the activity time parameters that must be satisfied to attain these extremes. Finally, a case study illustrates the application.
Sürer Budak, Evrim; Toptaş, Tayfun; Aydın, Funda; Öner, Ali Ozan; Çevikol, Can; Şimşek, Tayup
2017-02-05
To explore the correlation of the primary tumor's maximum standardized uptake value (SUVmax) and minimum apparent diffusion coefficient (ADCmin) with clinicopathologic features, and to determine their predictive power in endometrial cancer (EC). A total of 45 patients who had undergone staging surgery after a preoperative evaluation with ¹⁸F-fluorodeoxyglucose (FDG) positron emission tomography/computerized tomography (PET/CT) and diffusion-weighted magnetic resonance imaging (DW-MRI) were included in a prospective case-series study with planned data collection. Multiple linear regression analysis was used to determine the correlations between the study variables. The mean ADCmin and SUVmax values were determined as 0.72±0.22 and 16.54±8.73, respectively. A univariate analysis identified age, myometrial invasion (MI) and lymphovascular space involvement (LVSI) as the potential factors associated with ADCmin while it identified age, stage, tumor size, MI, LVSI and number of metastatic lymph nodes as the potential variables correlated to SUVmax. In multivariate analysis, on the other hand, MI was the only significant variable that correlated with ADCmin (p=0.007) and SUVmax (p=0.024). Deep MI was best predicted by an ADCmin cutoff value of ≤0.77 [93.7% sensitivity, 48.2% specificity, and 93.0% negative predictive value (NPV)] and SUVmax cutoff value of >20.5 (62.5% sensitivity, 86.2% specificity, and 81.0% NPV); however, the two diagnostic tests were not significantly different (p=0.266). Among clinicopathologic features, only MI was independently correlated with SUVmax and ADCmin. However, the routine use of ¹⁸F-FDG PET/CT or DW-MRI cannot be recommended at the moment due to less than ideal predictive performances of both parameters.
Phan Thanh Noi
2016-12-01
This study aims to quantitatively evaluate the land surface temperature (LST) derived from MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily near-surface air temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. No previous study of Vietnam has integrated both TERRA and AQUA LST, daytime and nighttime, for Ta estimation (using four MODIS LST datasets). In addition, to find out which variables best describe the differences between LST and Ta, we tested several popular methods, such as the Pearson correlation coefficient, stepwise selection, the Bayesian information criterion (BIC), adjusted R-squared and principal component analysis (PCA), on 14 variables (including the four LST products, NDVI, elevation, latitude, longitude, day length in hours, Julian day and four view-zenith-angle variables), and then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground-truth temperature derived from 15 climate stations depend on time and regional topography. The best results for Ta-max and Ta-min estimation were achieved when we combined LST daytime and nighttime from both TERRA and AQUA with data from the topography analysis.
Liu Jin
2012-01-01
Background: To evaluate the accuracy of combined maximum and minimum intensity projection-based internal target volume (ITV) delineation in 4-dimensional (4D) CT scans for liver malignancies. Methods: 4D CT with synchronized IV contrast data were acquired from 15 liver cancer patients (4 hepatocellular carcinomas; 11 hepatic metastases). We used five approaches to determine ITVs: (1) ITVAllPhases: contouring the gross tumor volume (GTV) on each of the 10 respiratory phases of the 4D CT data set and combining these GTVs; (2) ITV2Phase: contouring the GTV on the CT of the peak inhale phase (0% phase) and the peak exhale phase (50%) and then combining the two; (3) ITVMIP: contouring the GTV on the MIP with modifications based on the physician's visual verification of contours in each respiratory phase; (4) ITVMinIP: contouring the GTV on the MinIP with modification by the physician; (5) ITV2M: combining ITVMIP and ITVMinIP. ITVAllPhases was taken as the reference ITV, and the metrics used for comparison were the matching index (MI) and the under- and over-estimated volumes (Vunder and Vover). Results: 4D CT images were successfully acquired from 15 patients and tumor margins were clearly discernible in all patients. On CT images, 9 cases were of low density and 6 were mixed. After comparison of the metrics, ITV2M was the most appropriate tool for contouring the ITV of liver malignancies, with the highest MI (0.93 ± 0.04) and the lowest proportion of Vunder (0.07 ± 0.04). Moreover, tumor volume, three-dimensional target motion and the ratio of tumor vertical diameter to tumor motion magnitude in the cranio-caudal direction did not significantly influence the MI values or the proportion of Vunder. Conclusion: ITV2M is recommended as a reliable method for generating ITVs from 4D CT data sets in liver cancer.
Cooper, Margaret E; Goldstein, Toby H; Maher, Brion S; Marazita, Mary L
2005-12-30
In order to detect linkage of the simulated complex disease Kofendrerd Personality Disorder across studies from multiple populations, we performed a genome scan meta-analysis (GSMA). Using the 7-cM microsatellite map, nonparametric multipoint linkage analyses were performed separately on each of the four simulated populations to determine p-values. The genome of each population was divided into 20-cM bin regions, and each bin was rank-ordered based on the most significant linkage p-value for that population in that region. The bin ranks were then averaged across all four studies to determine the most significant 20-cM regions over all studies. Statistical significance of the averaged bin ranks was determined from a normal distribution of randomly assigned rank averages. To narrow the region of interest for fine-mapping, the meta-analysis was repeated two additional times, with the 20-cM bins offset by 7 cM and 13 cM, respectively, creating regions of overlap with the original method. The 6-7 cM shared regions, where the highest averaged 20-cM bins from each of the three offsets overlap, designated the minimum region of maximum significance (MRMS). Application of the GSMA-MRMS method revealed genome-wide significance (p-values refer to the average rank assigned to the bin) at regions including or adjacent to all of the simulated disease loci, including chromosome 1 (p < 0.05 for 7-14 cM, the region adjacent to D4). This GSMA analysis approach demonstrates the power of linkage meta-analysis to detect multiple genes simultaneously for a complex disorder. The MRMS method enhances this powerful tool to focus on more localized regions of linkage.
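The bin-ranking core of GSMA can be sketched as follows. This is an illustrative reimplementation under stated assumptions (rank n = most significant bin; genome-wide significance judged against a permutation null of maximum averaged ranks, where the abstract uses a normal approximation), not the authors' code:

```python
import numpy as np

def gsma(pvals, n_perm=2000, seed=1):
    """Genome scan meta-analysis sketch.  pvals is an (n_studies, n_bins) array
    of the best linkage p-value per 20-cM bin in each study.  Bins are
    rank-ordered within each study and ranks averaged across studies."""
    n_studies, n_bins = pvals.shape
    # smaller p-value -> higher rank (rank n_bins = most significant bin)
    ranks = np.argsort(np.argsort(-pvals, axis=1), axis=1) + 1
    avg = ranks.mean(axis=0)
    # permutation null: distribution of the maximum averaged rank under
    # random, independent rank assignments in each study
    rng = np.random.default_rng(seed)
    null_max = np.array([
        np.mean([rng.permutation(n_bins) + 1 for _ in range(n_studies)], axis=0).max()
        for _ in range(n_perm)
    ])
    # genome-wide empirical p-value for each bin's averaged rank
    p_emp = np.array([(null_max >= a).mean() for a in avg])
    return avg, p_emp

# four studies, 20 bins; bin 3 harbors the strongest linkage signal in all four
pvals = np.full((4, 20), 0.5)
pvals[:, 3] = 1e-4
avg, p_emp = gsma(pvals)
```

A bin that ranks highest in every study attains the maximum possible averaged rank and a small genome-wide empirical p-value.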
Shen, Tengming [Fermilab]; Ye, Liyang [NCSU, Raleigh]; Turrioni, Daniele [Fermilab]; Li, Pei [Fermilab]
2015-01-01
Small insert coils have been built using a multifilamentary Bi2Sr2CaCu2Ox round wire and characterized in background fields to explore the quench behaviors and limits of Bi2Sr2CaCu2Ox superconducting magnets, with an emphasis on assessing the impact of slow normal zone propagation on quench detection. Using heaters of various lengths to initiate a small normal zone, a coil was quenched safely more than 70 times without degradation, with the maximum coil temperature reaching 280 K. Coils withstood a resistive voltage of tens of mV for seconds without quenching, showing the high stability of these coils and suggesting that the quench detection voltage should be greater than 50 mV so as not to falsely trigger protection. The hot spot temperature for the resistive voltage of the normal zone to reach 100 mV increases from ~40 K to ~80 K as the operating wire current density Jo increases from 89 A/mm2 to 354 A/mm2, whereas for the voltage to reach 1 V, it increases from ~60 K to ~140 K, showing the increasing negative impact of slow normal zone propagation on quench detection with increasing Jo and the need to limit the quench detection voltage to <1 V. These measurements, coupled with an analytical quench model, were used to assess the impact of the maximum allowable voltage and temperature upon quench detection on quench protection, assuming the hot spot temperature is limited to <300 K.
Periodicity Analysis of SMMF in Solar Maximum and Minimum
叶妮; 祝凤荣; 周雪梅; 贾焕玉
2012-01-01
Using data observed by the Wilcox Solar Observatory from 1975 to 2010, the short periodicity of the solar mean magnetic field (SMMF) in solar maximum and minimum is analyzed. The results show that the SMMF has main periods of about 9 days, 13.5 days, and 27 days. During solar maximum, the SMMF has its most dominant period near 27 days. In solar minimum, however, the 13.5-day periodicity is the most significant, except in 1984-1986. These results show that the solar active region distribution at solar maximum is quite different from that at solar minimum.
Beloglazov, M. I.; Akhmetov, O. I.
2010-12-01
On the basis of two-component measurements of the atmospheric noise electromagnetic field on the Kola Peninsula, a change in the first Schumann resonance (SR-1), as an indicator of global lightning formation, is studied depending on the level of galactic cosmic rays (GCRs). It is found that the effect of GCRs is most evident during five months: January and September through December; during these months the SR-1 intensity in 2001 was higher than the 2007 level by a factor of 1.5 or more. This effect almost disappears when the Northern Hemisphere changes into its summer regime. It is assumed that an increase in GCR intensity increases the lightning occurrence frequency but also increases the probability that the power of each lightning stroke decreases, owing to early disruption of the charge separation and accumulation processes in a thundercloud; conversely, a decrease in GCR intensity decreases the lightning occurrence frequency while increasing the probability that a thundercloud accumulates higher energy, increasing the lightning power to the maximum possible values.
Ali Arkamose Assani
2016-10-01
Various manmade features (diversions, dredging, regulation, etc.) have affected water levels in the Great Lakes and their outlets since the 19th century. The goal of this study is to analyze the impacts of such features on the stationarity of, and dependence between, monthly mean maximum and minimum water levels in the Great Lakes and St. Lawrence River from 1919 to 2012. As far as stationarity is concerned, the Lombard method brought out shifts in the mean and variance of monthly mean water levels in Lake Ontario and the St. Lawrence River related to regulation of these waterbodies in the wake of the digging of the St. Lawrence Seaway in the mid-1950s. Water level shifts in the other lakes are linked to climate variability. As for the dependence between water levels, the copula method revealed a change in dependence, mainly between Lakes Erie and Ontario, following regulation of monthly mean maximum and minimum water levels in the latter. The impacts of manmade features primarily affected the temporal variability of monthly mean water levels in Lake Ontario.
Kremser, S.; Bodeker, G. E.; Lewis, J.
2014-01-01
A Climate Pattern-Scaling Model (CPSM) that simulates global patterns of climate change, for a prescribed emissions scenario, is described. A CPSM works by quantitatively establishing the statistical relationship between a climate variable at a specific location (e.g. daily maximum surface temperature, Tmax) and one or more predictor time series (e.g. global mean surface temperature, Tglobal) - referred to as the "training" of the CPSM. This training uses a regression model to derive fit coefficients that describe the statistical relationship between the predictor time series and the target climate variable time series. Once that relationship has been determined, and given the predictor time series for any greenhouse gas (GHG) emissions scenario, the change in the climate variable of interest can be reconstructed - referred to as the "application" of the CPSM. The advantage of using a CPSM rather than a typical atmosphere-ocean global climate model (AOGCM) is that the predictor time series required by the CPSM can usually be generated quickly using a simple climate model (SCM) for any prescribed GHG emissions scenario and then applied to generate global fields of the climate variable of interest. The training can be performed either on historical measurements or on output from an AOGCM. Using model output from 21st century simulations has the advantage that the climate change signal is more pronounced than in historical data and therefore a more robust statistical relationship is obtained. The disadvantage of using AOGCM output is that the CPSM training might be compromised by any AOGCM inadequacies. For the purposes of exploring the various methodological aspects of the CPSM approach, AOGCM output was used in this study to train the CPSM. These investigations of the CPSM methodology focus on monthly mean fields of daily temperature extremes (Tmax and Tmin). The methodological aspects of the CPSM explored in this study include (1) investigation of the advantage
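The "training"/"application" split of the CPSM can be sketched as a single-grid-point linear regression of a local variable on a global predictor. The 1.5x local amplification factor and the scenario values below are illustrative assumptions, not results from the paper:

```python
import numpy as np

def train(t_global, t_local):
    """CPSM 'training': regress the target climate variable at one location
    on the global-mean predictor series; returns (intercept, scaling slope)."""
    A = np.column_stack([np.ones_like(t_global), t_global])
    coef, *_ = np.linalg.lstsq(A, t_local, rcond=None)
    return coef

def apply_pattern(coef, t_global_new):
    """CPSM 'application': reconstruct the local variable from a new
    predictor series (e.g. produced by a simple climate model)."""
    return coef[0] + coef[1] * t_global_new

rng = np.random.default_rng(42)
t_global = np.linspace(0.0, 3.0, 100)                 # 3 K of global warming
t_local = 1.5 * t_global + rng.normal(0, 0.1, 100)    # grid point warms 1.5x faster

coef = train(t_global, t_local)
scenario = np.array([0.0, 2.0, 4.0])                  # hypothetical SCM scenario
proj = apply_pattern(coef, scenario)
```

The fitted slope recovers the local amplification, and `proj` gives the pattern-scaled local response for the new scenario without rerunning an AOGCM.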
Ohnaka, K.; Weigelt, G.; Hofmann, K.-H.
2017-01-01
Aims: Our recent visible polarimetric images of the well-studied AGB star W Hya taken at pre-maximum light (phase 0.92) with VLT/SPHERE-ZIMPOL have revealed clumpy dust clouds close to the star at ~2 R⋆. We present second-epoch SPHERE-ZIMPOL observations of W Hya at minimum light (phase 0.54) as well as high-spectral resolution long-baseline interferometric observations with the AMBER instrument at the Very Large Telescope Interferometer (VLTI). Methods: We observed W Hya with VLT/SPHERE-ZIMPOL at three wavelengths in the continuum (645, 748, and 820 nm), in the Hα line at 656.3 nm, and in the TiO band at 717 nm. The VLTI/AMBER observations were carried out in the wavelength region of the CO first overtone lines near 2.3 μm with a spectral resolution of 12 000. Results: The high-spatial resolution polarimetric images obtained with SPHERE-ZIMPOL have allowed us to detect clear time variations in the clumpy dust clouds as close as 34-50 mas (1.4-2.0 R⋆) to the star. We detected the formation of a new dust cloud as well as the disappearance of one of the dust clouds detected at the first epoch. The Hα and TiO emission extends to 150 mas (~6 R⋆), and the Hα images obtained at the two epochs reveal time variations. The degree of linear polarization measured at minimum light, which ranges from 13 to 18%, is higher than that observed at pre-maximum light. The power-law-type limb-darkened disk fit to the AMBER data in the continuum results in a limb-darkened disk diameter of 49.1 ± 1.5 mas and a limb-darkening parameter of 1.16 ± 0.49, indicating that the atmosphere is more extended with weaker limb-darkening compared to pre-maximum light. Our Monte Carlo radiative transfer modeling shows that the second-epoch SPHERE-ZIMPOL data can be explained by a shell of 0.1 μm grains of Al2O3, Mg2SiO4, and MgSiO3 with a 550 nm optical depth of 0.6 ± 0.2 and inner and outer radii of 1.3 R⋆ and 10 ± 2 R⋆, respectively. Our modeling suggests the predominance of small (0
Svendsen, Jon C; Tirsgaard, Bjørn; Cordero, Gerardo A; Steffensen, John F
2015-01-01
Intraspecific variation and trade-off in aerobic and anaerobic traits remain poorly understood in aquatic locomotion. Using gilthead sea bream (Sparus aurata) and Trinidadian guppy (Poecilia reticulata), both axial swimmers, this study tested four hypotheses: (1) gait transition from steady to unsteady (i.e., burst-assisted) swimming is associated with anaerobic metabolism evidenced as excess post exercise oxygen consumption (EPOC); (2) variation in swimming performance (critical swimming speed; U crit) correlates with metabolic scope (MS) or anaerobic capacity (i.e., maximum EPOC); (3) there is a trade-off between maximum sustained swimming speed (U sus) and minimum cost of transport (COTmin); and (4) variation in U sus correlates positively with optimum swimming speed (U opt; i.e., the speed that minimizes energy expenditure per unit of distance traveled). Data collection involved swimming respirometry and video analysis. Results showed that anaerobic swimming costs (i.e., EPOC) increase linearly with the number of bursts in S. aurata, with each burst corresponding to 0.53 mg O2 kg(-1). Data are consistent with a previous study on striped surfperch (Embiotoca lateralis), a labriform swimmer, suggesting that the metabolic cost of burst swimming is similar across various types of locomotion. There was no correlation between U crit and MS or anaerobic capacity in S. aurata indicating that other factors, including morphological or biomechanical traits, influenced U crit. We found no evidence of a trade-off between U sus and COTmin. In fact, data revealed significant negative correlations between U sus and COTmin, suggesting that individuals with high U sus also exhibit low COTmin. Finally, there were positive correlations between U sus and U opt. Our study demonstrates the energetic importance of anaerobic metabolism during unsteady swimming, and provides intraspecific evidence that superior maximum sustained swimming speed is associated with superior swimming
Alp, Sehnaz; Sancak, Banu; Hascelik, Gulsen; Arikan, Sevtap
2010-11-01
We investigated the incidence of trailing growth with fluconazole in 101 clinical Candida isolates (49 C. albicans and 52 C. tropicalis) and tried to establish the most convenient susceptibility testing method and medium for fluconazole minimum inhibitory concentration (MIC) determination. MICs were determined by CLSI M27-A2 broth microdilution (BMD) and Etest methods on RPMI-1640 agar supplemented with 2% glucose (RPG) and on Mueller-Hinton agar supplemented with 2% glucose and 0.5 μg ml(-1) methylene blue (GMB). BMD and Etest MICs were read at 24 and 48 h, and susceptibility categories were compared. All isolates were determined to be susceptible with BMD, Etest-RPG, and Etest-GMB at 24 h. While all isolates were interpreted as susceptible at 48 h on Etest-RPG and Etest-GMB, one C. albicans isolate was interpreted as susceptible-dose dependent (S-DD) and two C. tropicalis isolates as resistant with BMD. On Etest-RPG, trailing growth caused widespread microcolonies within the inhibition zone and resulted in confusion in MIC determination. On Etest-GMB, because of the near absence of microcolonies within the zone of inhibition, MICs were evaluated more easily. We conclude that, for the determination of fluconazole MICs of trailing Candida isolates, the Etest method has an advantage over BMD and can be used along with this reference method. Moreover, GMB appears more beneficial than RPG for the fluconazole Etest. © 2009 Blackwell Verlag GmbH.
Rozanov, E. V.; Schlesinger, M. E.; Egorova, T. A.; Li, B.; Andronova, N.; Zubov, V. A.
2004-01-01
The University of Illinois at Urbana-Champaign general circulation model with interactive photochemistry has been applied to estimate the changes in ozone, temperature, and dynamics caused by the observed enhancement of solar ultraviolet radiation during the 11-year solar activity cycle. Two 15-year-long runs with spectral solar UV fluxes for the minimum and maximum solar activity cases have been performed. Due to the imposed changes in spectral solar UV fluxes, the annual-mean ozone mixing ratio increases by 3% over the southern middle latitudes in the upper stratosphere and by 2% in the northern lower stratosphere. The model also shows a statistically significant warming of 1.2 K in the stratosphere and an acceleration of the polar-night jets in both hemispheres. The most pronounced changes were found in November and March over the Northern Hemisphere and in September-October over the Southern Hemisphere. The magnitude and seasonal behavior of the simulated changes resemble the most robust features of the solar signal obtained from observational data analysis; however, they do not exactly coincide. The simulated zonal wind and temperature response during late fall to early spring contains the observed downward and poleward propagation of the solar signal; however, its structure and phase are different from those observed. The response of the surface air temperature in December consists of warming over northern Europe, the USA, and eastern Russia, and cooling over Greenland, Alaska, and central Asia. This pattern resembles the changes of the surface winter temperature after a major volcanic eruption. Model results for September-October show an intensification of ozone loss by up to 10% and expansion of the "ozone hole" toward South America.
雷苏娇; 李俊; 吴海博; 冯宗明
2014-01-01
Multipath routing is a new feature of CCN (Content-Centric Networking) that can be used to enhance the efficiency of network resource usage and balance network congestion. Based on minimum-cost maximum-flow theory, we propose a multipath routing algorithm that aims to minimize delay and maximize bandwidth. It chooses different routing paths automatically according to differences in link bandwidth and delay, achieving optimal bandwidth utilization of the entire network. Simulation experiments show that, compared with shortest-path routing, our algorithm reduces packet loss, decreases the bottleneck link load by approximately 60%, and alleviates network congestion.
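The routing scheme builds on min-cost max-flow. As an illustrative sketch of that underlying primitive (classic successive shortest paths, with link delay as cost and bandwidth as capacity; not the paper's CCN-specific algorithm):

```python
from collections import deque

def min_cost_max_flow(n, edges, s, t):
    """Successive-shortest-paths min-cost max-flow on n nodes.
    edges: list of (u, v, capacity, cost).  Returns (max_flow, total_cost)."""
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward edge
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual edge
    flow = total_cost = 0
    while True:
        # cheapest augmenting path via SPFA (queue-based Bellman-Ford)
        dist = [float('inf')] * n
        in_q = [False] * n
        prev = [None] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            in_q[u] = False
            for i, (v, cap, cost, _) in enumerate(graph[u]):
                if cap > 0 and dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
                    prev[v] = (u, i)
                    if not in_q[v]:
                        in_q[v] = True
                        q.append(v)
        if prev[t] is None:          # no augmenting path left: flow is maximum
            return flow, total_cost
        push = float('inf')          # bottleneck capacity along the path
        v = t
        while v != s:
            u, i = prev[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t                        # push flow and update residual capacities
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[t]

# Two disjoint paths from node 0 to node 3: a cheap one and an expensive one.
edges = [(0, 1, 5, 1), (1, 3, 5, 1), (0, 2, 5, 3), (2, 3, 5, 3)]
flow, cost = min_cost_max_flow(4, edges, 0, 3)
```

With delays as costs, the algorithm fills the low-delay path first and spills the remaining traffic onto the higher-delay path, which is the load-spreading behavior the abstract describes.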
Wage and Labor Standards Administration (DOL), Washington, DC.
The 1966 amendments to the Fair Labor Standards Act extended enterprise coverage to all public and private educational institutions. In October 1968, one out of seven of the 2 million nonsupervisory nonteaching employees working in schools was paid below the $1.30 minimum wage which became effective on February 1, 1969. Three-fifths of those below…
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
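Retrieval from partial content can be sketched as follows. This is an illustrative toy, not the paper's construction: assuming a uniform source and content-independent erasures, maximum-likelihood retrieval reduces to choosing the stored pattern that agrees with the query on every known coordinate (minimum Hamming distance over known bits):

```python
def ml_retrieve(stored, query):
    """Maximum-likelihood retrieval from an erasure-corrupted query.
    stored: list of equal-length binary tuples; query: sequence in which
    None marks an erased coordinate."""
    def mismatches(p):
        # erased coordinates carry no likelihood information and are skipped
        return sum(1 for b, q in zip(p, query) if q is not None and b != q)
    return min(stored, key=mismatches)

memory = [(0, 1, 1, 0), (1, 1, 0, 0), (1, 0, 1, 1)]
result = ml_retrieve(memory, (1, None, 0, None))   # bits 1 and 3 erased
```

Only the second stored pattern matches both known bits, so it is returned.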
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Ludmila Bardin
2010-01-01
Multiple regression equations to estimate mean monthly and annual maximum and minimum temperatures were developed as functions of altitude, latitude, and longitude (and, in simple form, of altitude alone) for the "Pólo Turístico do Circuito das Frutas" region of the State of São Paulo. The obtained coefficients of determination varied from 0.91 to 0.96 for the maximum and from 0.71 to 0.94 for the minimum air temperature. Maps of the spatial variability of the mean monthly and annual maximum and minimum temperatures are presented for the region.
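The regression models described can be sketched with ordinary least squares. The station coordinates and the -0.6 °C per 100 m lapse rate below are hypothetical illustration values, not the study's data:

```python
import numpy as np

# Hypothetical stations: columns are altitude (m), latitude (deg), longitude (deg)
stations = np.array([
    [700, -23.0, -46.9], [850, -23.1, -46.5], [600, -22.9, -47.2],
    [950, -23.2, -46.7], [780, -23.0, -46.8], [640, -22.8, -47.0],
])
# synthesize mean maximum temperatures with an assumed -0.6 degC / 100 m lapse rate
tmax = 28.0 - 0.006 * stations[:, 0]

# multiple regression: tmax ~ intercept + altitude + latitude + longitude
A = np.column_stack([np.ones(len(stations)), stations])
coef, *_ = np.linalg.lstsq(A, tmax, rcond=None)
pred = A @ coef
r2 = 1 - ((tmax - pred) ** 2).sum() / ((tmax - tmax.mean()) ** 2).sum()
```

Because the synthetic temperatures depend only on altitude, the fit recovers the assumed lapse rate in the altitude coefficient, with R² near 1; real station data would of course leave residual scatter.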
Carlos Rogério de Mello
2010-04-01
Maximum discharges are hydrological quantities applied to the design of hydraulic structures, while minimum discharges are used to assess water availability in watersheds and the behavior of subterranean flow. This study aimed to construct statistical confidence intervals for annual maximum and minimum daily discharges and to relate them to the physiographic characteristics of the six largest basins of the Alto Rio Grande region, State of Minas Gerais, upstream of the UHE-Camargos/CEMIG reservoir. The Gumbel and Gamma probability distributions were applied to the maximum and minimum discharge series, respectively, using maximum likelihood estimators. The confidence intervals constitute an important tool for better understanding and estimating discharges, and are influenced by the geological characteristics of the basins. Based on them, the Alto Rio Grande region was found to comprise two distinct areas: the first, covering the Aiuruoca, Carvalhos, and Bom Jardim basins, showed the highest maximum and minimum discharges, indicating the potential for more significant floods and greater water availability; the second, associated with the F. Laranjeiras, Madre de Deus, and Andrelândia basins, showed the lowest water availability.
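Fitting a Gumbel (EV-I) distribution to annual maximum discharges underlies this kind of analysis. The sketch below uses the simpler method-of-moments fit (the study uses maximum likelihood) and a hypothetical peak-discharge series to illustrate return-level estimation:

```python
import math

def gumbel_fit_mom(sample):
    """Method-of-moments fit of a Gumbel distribution: returns (mu, beta)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi          # scale parameter
    mu = mean - 0.5772156649 * beta              # location (Euler-Mascheroni const.)
    return mu, beta

def gumbel_quantile(mu, beta, return_period):
    """Discharge exceeded on average once per return_period years."""
    p = 1 - 1 / return_period                    # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

annual_peaks = [412, 380, 455, 520, 390, 610, 444, 371, 498, 530]  # m^3/s, hypothetical
mu, beta = gumbel_fit_mom(annual_peaks)
q100 = gumbel_quantile(mu, beta, 100)            # estimated 100-year flood
```

Confidence intervals around such quantiles (the paper's focus) would then follow from the sampling distribution of the fitted parameters.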
The Second Law Today: Using Maximum-Minimum Entropy Generation
Umberto Lucia
2015-11-01
There are a great number of thermodynamic schools, independent of each other, without a powerful general approach, and split over non-equilibrium thermodynamics. In 1912, in relation to stationary non-equilibrium states, Ehrenfest introduced the fundamental question of the existence of a functional that achieves its extreme value for stable states, as entropy does for stationary states in equilibrium thermodynamics. Today, the new frontiers of science and engineering, from power engineering to environmental sciences, from chaos to complex systems, from life sciences to nanosciences, etc., require a unified approach in order to optimize results and obtain a powerful approach to non-equilibrium thermodynamics and open systems. In this paper, a generalization of the Gouy-Stodola approach is suggested as a possible answer to the Ehrenfest question.
The Best Defense: Making Maximum Sense of Minimum Deterrence
2011-06-01
chief nuclear scientist, Homi Bhabha, looked to nuclear power as a means to fuel India's economic development and bring India to the upper echelon of… nuclear weapons. Homi Bhabha pressed Prime Minister Shastri to approve a nuclear test so that India could both showcase the… removed from being capable of conducting a nuclear test explosion. The untimely death of Homi Bhabha in 1966 and war with Pakistan in 1971 served to…
Kernel maximum autocorrelation factor and minimum noise fraction transformations
Nielsen, Allan Aasbjerg
2010-01-01
[…] dimensional feature space via the kernel function and then performing a linear analysis in that space. Three examples show the very successful application of kernel MAF/MNF analysis to 1) change detection in DLR 3K camera data recorded 0.7 seconds apart over a busy motorway, 2) change detection…
董丹宏; 黄刚
2015-01-01
Based on daily maximum and minimum temperature data from 740 homogenized surface meteorological stations, the present study investigates the regional characteristics of the temperature trend and the dependence of maximum and minimum temperature and diurnal temperature range changes on altitude during the period 1963-2012. It is found that the magnitude of the minimum temperature increase is larger than that of the maximum temperature increase. The significant warming areas are located at high altitude, all of which increase remarkably in size during the study period. The maximum and minimum temperature and diurnal temperature range trends increase with altitude, except in spring. The correlation coefficients between the maximum temperature trend and altitude are the highest. At the same altitude, the amplitudes of maximum and minimum temperature show inconsistency: they exhibit increasing trends in the 1990s, with significant change at low altitude; they change minimally in the 1980s; and at high altitudes (above 2000 m), the magnitudes of their changes are weak before the 1990s but stronger in the last 10 years of the study period. The seasonal variability of the diurnal temperature range is large above 2000 m, decreasing in summer but increasing in winter. Before the 1990s, there is no significant variation between maximum and minimum temperature and altitude. However, their trends almost all decrease and then increase with altitude in the last 20 years. Additionally, the response to climate in highland areas is more sensitive than that in lowland areas.
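The elevation-dependence analysis rests on per-station least-squares trends. As a hedged sketch with synthetic data (two hypothetical stations with assumed warming rates, not the study's 740-station dataset):

```python
import numpy as np

def decadal_trend(years, temps):
    """Least-squares linear warming trend, in degC per decade."""
    slope = np.polyfit(years, temps, 1)[0]   # degC per year
    return slope * 10

rng = np.random.default_rng(7)
years = np.arange(1963, 2013)
# hypothetical stations: the high-altitude one is assumed to warm faster,
# mimicking the elevation-dependent warming the study reports
low_station = 15.0 + 0.02 * (years - 1963) + rng.normal(0, 0.3, years.size)
high_station = 2.0 + 0.04 * (years - 1963) + rng.normal(0, 0.3, years.size)

low_trend = decadal_trend(years, low_station)
high_trend = decadal_trend(years, high_station)
```

Regressing such per-station trends against station altitude would then give the trend-altitude correlations discussed in the abstract.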
Orme, John S.; Nobbs, Steven G.
1995-01-01
The minimum fuel mode of the NASA F-15 research aircraft is designed to minimize fuel flow while maintaining constant net propulsive force (FNP), effectively reducing thrust specific fuel consumption (TSFC) during cruise flight conditions. The test maneuvers were flown at stabilized flight conditions. The aircraft test engine was allowed to stabilize at the cruise conditions before data collection was initiated; data were first recorded with performance seeking control (PSC) not engaged, and then with the PSC system engaged. The maneuvers were flown back-to-back to allow direct comparisons by minimizing the effects of variations in test day conditions. The minimum fuel mode was evaluated at subsonic and supersonic Mach numbers and focused on three altitudes: 15,000; 30,000; and 45,000 feet. Flight data were collected for part power, military power, partial afterburning, and maximum afterburning conditions. The TSFC savings at supersonic Mach numbers, ranging from approximately 4% to nearly 10%, are in general much larger than at subsonic Mach numbers because of PSC trims to the afterburner.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
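The reduction above ultimately rests on maximum-flow computations. As a minimal stand-in (ignoring transit times, so this is a static Edmonds-Karp solver rather than the dynamic-flow routine the paper actually requires), the following sketch shows the kind of subroutine the proposed algorithm invokes twice:

```python
from collections import defaultdict, deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow on a static network.

    `capacity` maps (u, v) -> edge capacity. This static solver only
    illustrates the building block; extending it to dynamic flows with
    transit times is beyond this sketch.
    """
    adj = defaultdict(set)           # adjacency incl. reverse residual edges
    for u, v in capacity:
        adj[u].add(v)
        adj[v].add(u)
    flow = defaultdict(int)

    def residual(u, v):
        # net-flow formulation: residual = cap - f(u,v) + f(v,u)
        return capacity.get((u, v), 0) - flow[(u, v)] + flow[(v, u)]

    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return total
        # recover the path, find its bottleneck, and augment
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(residual(u, v) for u, v in path)
        for u, v in path:
            flow[(u, v)] += b
        total += b
```

On a small diamond network this returns the familiar max-flow/min-cut value; the paper's constrained minimum dynamic cut plays the role the static cut plays here.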
The minimum work requirement for distillation processes
Cerci, Yunus; Cengel, Yunus A.; Wood, Byard [Nevada Univ., Las Vegas, NV (United States). Dept. of Mechanical Engineering]
2000-07-01
A typical ideal distillation process is proposed and analyzed using the first and second laws of thermodynamics, with particular attention to the minimum work requirement for individual processes. The distillation process consists of an evaporator, a condenser, a heat exchanger, and a number of heaters and coolers. Several Carnot engines are also employed to perform the heat interactions of the distillation process with the surroundings and to determine the minimum work requirement for the processes. The Carnot engines give the maximum possible work output or the minimum work input associated with the processes, and therefore the net result of these inputs and outputs leads to the minimum work requirement for the entire distillation process. It is shown that the minimum work relation for the distillation process is the same as the minimum work input relation found by Cerci et al. [1] for an incomplete separation of incoming saline water, and depends only on the properties of the incoming saline water and the outgoing pure water and brine. Also, certain aspects of the minimum work relation found are discussed briefly. (authors)
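For intuition about such minimum-work results, the classical reversible work of completely separating an ideal mixture, W_min = -nRT Σ x_i ln x_i, can be computed directly. This is a textbook illustration under an ideal-solution assumption, not the paper's exact relation for saline water:

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def min_separation_work(T, mole_fractions, n_total=1.0):
    """Minimum reversible isothermal work (J) to completely separate an
    ideal mixture of n_total moles at temperature T (K):
        W_min = -n R T * sum(x_i * ln x_i)
    Illustrative only; real separations (e.g., desalination brines)
    deviate from ideality."""
    return -n_total * R * T * sum(x * math.log(x) for x in mole_fractions if x > 0)
```

For an equimolar binary mixture at 298 K this gives n R T ln 2, about 1.7 kJ per mole of mixture, a useful lower bound to compare actual distillation work against.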
Effects of protocol design on lactate minimum power.
Johnson, M A; Sharpe, G R
2011-03-01
The aim of this investigation was to use a validated lactate minimum test protocol and evaluate whether blood lactate responses and the lactate minimum power are influenced by the starting power (study 1) and 1 min inter-stage rest intervals (study 2) during the incremental phase. Study 1: 8 subjects performed a lactate minimum test comprising a lactate elevation phase, a recovery phase, and an incremental phase of 5 continuous 4 min stages, with starting power being 40% or 45% of the maximum power achieved during the lactate elevation phase, and with power increments of 5% maximum power. Study 2: 8 subjects performed 2 identical lactate minimum tests except that during one of the tests the incremental phase included 1 min inter-stage rest intervals. The lactate minimum power was lower when the incremental phase commenced at 40% (175±29 W) compared to 45% (184±30 W) maximum power (pvalidity and therefore training status evaluation and exercise prescription.
Reference respiratory waveforms by minimum jerk model analysis
Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka; Yagi, Masashi; Mizuno, Hirokazu; Ogawa, Kazuhiko [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Yamadaoka 2-2, Suita-shi, Osaka 565-0871 (Japan); Ota, Seiichi [Department of Medical Technology, Osaka University Hospital, Yamadaoka 2-15, Suita-shi, Osaka 565-0871 (Japan)
2015-09-15
Purpose: The CyberKnife® robotic surgery system can deliver radiation to a tumor subject to respiratory movements using Synchrony® mode with less than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joint, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically derives smoothness by means of jerk, the third derivative of position with respect to time (i.e., the derivative of acceleration), which is proportional to the time rate of change of force, was introduced to model a patient-specific respiratory motion wave providing smooth motion tracking using CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were being tracked smoothly by Synchrony® mode, a tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion was in accordance with three pattern waves (cosine, typical free-breathing, and minimum jerk theoretical wave models) for the clinically relevant superior-inferior direction from six volunteers, assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum peak amplitude of radial tracking discrepancy compared with the waveforms modeled by the cosine and typical free-breathing models by 22% and 35%, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy as indicated by radial tracking discrepancy
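The minimum jerk idea can be illustrated with the standard fifth-order polynomial that minimizes integrated squared jerk for a point-to-point move (the Flanagan/Hogan form). The paper's respiratory waveforms are a patient-specific adaptation of this kind of profile; the polynomial below is assumed only for illustration:

```python
def minimum_jerk(x0, xf, T, t):
    """Minimum-jerk position at time t for a move from x0 to xf over
    duration T. The polynomial 10*tau^3 - 15*tau^4 + 6*tau^5 has zero
    velocity and zero acceleration at both endpoints, which is what
    makes the motion 'smooth' in the jerk sense."""
    tau = t / T
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
```

The trajectory starts and ends at rest and passes through the midpoint of the excursion at mid-time, mimicking one half-cycle of a smooth breathing motion.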
Simple and Three-Valued Simple Minimum Coloring Games
Musegaas, Marieke; Borm, Peter; Quant, Marieke
2015-01-01
In this paper minimum coloring games are considered. We characterize the type of conflict graphs inducing simple or three-valued simple minimum coloring games. We provide an upper bound on the number of maximum cliques of conflict graphs inducing such games. Moreover, a characterization of the core
Rising above the Minimum Wage.
Even, William; Macpherson, David
An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…
Cardinal, Jean; Joret, Gwenaël
2008-01-01
We study graph orientations that minimize the entropy of the in-degree sequence. The problem of finding such an orientation is an interesting special case of the minimum entropy set cover problem previously studied by Halperin and Karp [Theoret. Comput. Sci., 2005] and by the current authors [Algorithmica, to appear]. We prove that the minimum entropy orientation problem is NP-hard even if the graph is planar, and that there exists a simple linear-time algorithm that returns an approximate solution with an additive error guarantee of 1 bit. This improves on the only previously known algorithm which has an additive error guarantee of log_2 e bits (approx. 1.4427 bits).
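To make the objective concrete, the following sketch computes the entropy of an orientation's in-degree sequence and applies a naive greedy pass that orients each edge toward the endpoint with the larger current in-degree, concentrating degree mass to lower entropy. The greedy rule is our illustration only; it is not the 1-bit-additive-error algorithm from the paper:

```python
import math

def indegree_entropy(indeg, m):
    """Entropy (bits) of the in-degree sequence, with m = number of edges;
    in-degrees are normalized to a probability distribution d_i/m."""
    return -sum((d / m) * math.log2(d / m) for d in indeg.values() if d > 0)

def greedy_orientation(edges):
    """Orient each edge toward whichever endpoint currently absorbs
    more edges (ties go to the first endpoint). Returns the list of
    oriented edges (tail, head) and the resulting entropy."""
    indeg = {}
    orientation = []
    for u, v in edges:
        head = u if indeg.get(u, 0) >= indeg.get(v, 0) else v
        tail = v if head == u else u
        orientation.append((tail, head))
        indeg[head] = indeg.get(head, 0) + 1
    return orientation, indegree_entropy(indeg, len(edges))
```

On a star, all edges end up pointing at the hub and the entropy is 0 bits, the global minimum; on harder instances the greedy pass carries no guarantee.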
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
DILEMATIKA PENETAPAN UPAH MINIMUM
. Pitaya
2015-02-01
Full Text Available In the effort of creating appropiate wage for employees, it is necessary to determine the wages by considering the increase of poverty without ignoring the increase of productivity, the progressivity of companies and the growth of economic. The new minimum wages in the provincial level and the regoinal/municipality level have been implemented per 1st January in Indonesia since 2001. The determination of minimum wage for provinvial level should be done 30 days before 1st January, whereas the determination of minimumwage for regional/municipality level should be done 40 days before 1st January. Moreover,there is an article which governs thet the minimumwage will be revised annually. By considering the time of determination and the time of revision above,it can be predicted that before and after the determination date will be crucial time. This is because the controversy among parties in industrial relationships will arise. The determination of minimum wage will always become a dilemmatic step which has to be done by the Government. Through this policy, on one side the government attempts to attract many investors, however, on the other side the government also has to protect the employees in order to have the appropiate wage in accordance with the standard of living.
Minimum quality standards and exports
2015-01-01
This paper studies the interaction of a minimum quality standard and exports in a vertical product differentiation model when firms sell global products. If ex ante quality of foreign firms is lower (higher) than the quality of exporting firms, a mild minimum quality standard in the home market hinders (supports) exports. The minimum quality standard increases quality in both markets. A welfare maximizing minimum quality standard is always lower under trade than under autarky. A minimum quali...
Minimum Variance Portfolios in the Brazilian Equity Market
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio, and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, making it easily replicable by individual and institutional investors alike.
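With the sample covariance estimator, the unconstrained fully-invested minimum variance portfolio has a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch (shorting allowed; the portfolios in the paper may impose further constraints such as 130/30 limits):

```python
import numpy as np

def min_variance_weights(cov):
    """Fully-invested minimum variance weights for covariance matrix cov:
    w = inv(Sigma) 1 / (1' inv(Sigma) 1). Solving the linear system is
    preferred over forming the explicit inverse."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()
```

For a diagonal covariance the weights are simply inversely proportional to each asset's variance, which matches the intuition that the low-risk assets dominate the portfolio.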
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.
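The MEE risk is usually estimated nonparametrically. A common surrogate is Renyi's quadratic entropy of the classification errors with a Parzen (Gaussian) window; the kernel width σ below is an arbitrary illustrative choice:

```python
import numpy as np

def renyi_quadratic_error_entropy(errors, sigma=0.5):
    """Parzen-window estimate of Renyi's quadratic entropy of the error
    distribution, a standard surrogate in MEE training:
        H2 = -log( (1/n^2) * sum_ij G(e_i - e_j; 2*sigma^2) )
    where G is a Gaussian kernel. Lower values mean errors are more
    concentrated (ideally around zero)."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]          # all pairwise error differences
    var = 2.0 * sigma**2                    # pairwise kernel variance
    kernel = np.exp(-diff**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return -np.log(kernel.mean())
```

Minimizing this quantity over the classifier's parameters drives the error samples to cluster tightly, which is the intuition behind MEE-trained machines.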
Do Minimum Wages Fight Poverty?
David Neumark; William Wascher
1997-01-01
The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...
Minimum wages, earnings, and migration
Boffy-Ramirez, Ernest
2013-01-01
Does increasing a state’s minimum wage induce migration into the state? Previous literature has shown mobility in response to welfare benefit differentials across states, yet few have examined the minimum wage as a cause of mobility...
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Kavitha, Telikepalli; Nimbhorkar, Prajakta
2010-01-01
We consider an extension of the popular matching problem in this paper. The input to the popular matching problem is a bipartite graph G = (A U B,E), where A is a set of people, B is a set of items, and each person a belonging to A ranks a subset of items in an order of preference, with ties allowed. The popular matching problem seeks to compute a matching M* between people and items such that there is no matching M where more people are happier with M than with M*. Such a matching M* is called a popular matching. However, there are simple instances where no popular matching exists. Here we consider the following natural extension to the above problem: associated with each item b belonging to B is a non-negative price cost(b), that is, for any item b, new copies of b can be added to the input graph by paying an amount of cost(b) per copy. When G does not admit a popular matching, the problem is to "augment" G at minimum cost such that the new graph admits a popular matching. We show that this problem is...
Social Security's special minimum benefit.
Olsen, K A; Hoffmeyer, D
Social Security's special minimum primary insurance amount (PIA) provision was enacted in 1972 to increase the adequacy of benefits for regular long-term, low-earning covered workers and their dependents or survivors. At the time, Social Security also had a regular minimum benefit provision for persons with low lifetime average earnings and their families. Concerns were rising that the low lifetime average earnings of many regular minimum beneficiaries resulted from sporadic attachment to the covered workforce rather than from low wages. The special minimum benefit was seen as a way to reward regular, low-earning workers without providing the windfalls that would have resulted from raising the regular minimum benefit to a much higher level. The regular minimum benefit was subsequently eliminated for workers reaching age 62, becoming disabled, or dying after 1981. Under current law, the special minimum benefit will phase out over time, although it is not clear from the legislative history that this was Congress's explicit intent. The phaseout results from two factors: (1) special minimum benefits are paid only if they are higher than benefits payable under the regular PIA formula, and (2) the value of the regular PIA formula, which is indexed to wages before benefit eligibility, has increased faster than that of the special minimum PIA, which is indexed to inflation. Under the Social Security Trustees' 2000 intermediate assumptions, the special minimum benefit will cease to be payable to retired workers attaining eligibility in 2013 and later. Their benefits will always be larger under the regular benefit formula. As policymakers consider Social Security solvency initiatives--particularly proposals that would reduce benefits or introduce investment risk--interest may increase in restoring some type of special minimum benefit as a targeted protection for long-term low earners. Two of the three reform proposals offered by the President's Commission to Strengthen
DETERMINING MINIMUM HIKING TIME USING DEM
Magyari-Sáska, Zsolt
2012-11-01
Minimum hiking time calculations can be used to delimit the maximum area in which a lost person can be. Such area delimitation can help rescue teams organize their missions efficiently. The two well-known walking time rules were used to determine, compare, and correlate the results obtained in a test area. The calculated times have a high correlation coefficient, which makes possible a precise conversion between Naismith and Tobler walking times. For delimiting the rescue area, graph-based modeling from a raster layer was implemented using the R environment. The main challenge in such modeling is efficient memory management, as the use of Dijkstra's algorithm on a directional cost graph requires substantial memory resources.
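The two walking-time rules mentioned can be sketched directly: Tobler's hiking function gives walking speed as a function of slope, while Naismith's rule adds time for ascent. Per-cell travel times like these would weight the arcs of the Dijkstra cost graph built from the DEM (the constants are the standard published ones; the graph construction itself is omitted here):

```python
import math

def tobler_speed(slope):
    """Tobler's hiking function: speed in km/h for a given slope
    (rise/run). Maximum speed of 6 km/h occurs on a gentle downhill
    of -5% grade."""
    return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

def naismith_hours(distance_km, ascent_m):
    """Naismith's rule: 1 h per 5 km of horizontal distance plus
    1 h per 600 m of ascent."""
    return distance_km / 5.0 + ascent_m / 600.0

def cell_time_hours(distance_km, rise_m):
    """Tobler traversal time for one DEM cell edge of the given
    horizontal length and elevation change; would serve as the
    directional arc cost in the Dijkstra graph."""
    slope = (rise_m / 1000.0) / distance_km
    return distance_km / tobler_speed(slope)
```

Note that the cost is directional: climbing a cell edge and descending it yield different times, which is why the abstract refers to a directional cost graph.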
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategy in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
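For the Mean Energy Model, the maximum entropy distribution over a finite state space subject to a prescribed mean energy is the Gibbs family p_i ∝ exp(-b·E_i), with the multiplier b chosen to match the constraint. A small sketch, solving for b by bisection (our notation, not the paper's):

```python
import math

def maxent_gibbs(energies, mean_energy, lo=-50.0, hi=50.0, iters=200):
    """Maximum entropy distribution with a mean-energy constraint:
    p_i proportional to exp(-b * E_i), with b found by bisection so
    that sum(p_i * E_i) equals mean_energy. The mean is a decreasing
    function of b, which makes bisection valid."""
    def mean_at(b):
        ws = [math.exp(-b * e) for e in energies]
        z = sum(ws)
        return sum(w * e for w, e in zip(ws, energies)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_at(mid) > mean_energy:
            lo = mid          # mean too high -> need larger b
        else:
            hi = mid
    b = (lo + hi) / 2.0
    ws = [math.exp(-b * e) for e in energies]
    z = sum(ws)
    return [w / z for w in ws]
```

With the mean constraint set to the unweighted average energy, b tends to zero and the solution reduces to the uniform distribution, the unconstrained entropy maximizer.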
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
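The correntropy objective can be sketched for a linear predictor: maximize the mean Gaussian kernel of the prediction errors, minus an L2 penalty. This toy gradient ascent is our illustration of the MCC idea, not the paper's alternating optimization:

```python
import numpy as np

def correntropy(y_true, y_pred, sigma=1.0):
    """Empirical correntropy with a Gaussian kernel. Large errors
    contribute almost nothing, which de-emphasizes mislabeled or
    outlying samples compared with a squared loss."""
    e = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return np.mean(np.exp(-e**2 / (2 * sigma**2)))

def mcc_linear_fit(X, y, lam=0.01, lr=0.1, iters=2000, sigma=1.0):
    """Toy gradient ascent on regularized correntropy for a linear
    predictor w'x. The gradient of mean exp(-e^2/2s^2) w.r.t. w is
    X' (exp(-e^2/2s^2) * e) / (n * s^2); lam controls the L2 penalty."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        e = y - X @ w
        g = np.exp(-e**2 / (2 * sigma**2)) * e / sigma**2
        w += lr * (X.T @ g / len(y) - lam * w)
    return w
```

On clean data the fit approaches the least-squares solution (shrunk slightly by the penalty); the benefit of the kernel weighting appears when some labels are grossly wrong.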
Measurement of Minimum Bias Observables with ATLAS
Kvita, Jiri; The ATLAS collaboration
2017-01-01
The modelling of Minimum Bias (MB) is a crucial ingredient to learn about the description of soft QCD processes. It also has significant relevance for the simulation of the environment at the LHC with many concurrent pp interactions ("pileup"). The ATLAS collaboration has provided new measurements of the inclusive charged particle multiplicity and its dependence on transverse momentum and pseudorapidity in special data sets with low LHC beam currents, recorded at center-of-mass energies of 8 TeV and 13 TeV. The measurements cover a wide spectrum using charged particle selections with minimum transverse momentum of both 100 MeV and 500 MeV and in various phase space regions of low and high charged particle multiplicities.
2010-07-01
[Table excerpt] Minimum operating parameters, monitored hourly: minimum flow rate; minimum pressure drop across the wet scrubber or minimum horsepower. Control configurations: wet scrubber; dry scrubber followed by fabric filter; dry scrubber followed by fabric filter and wet scrubber. Maximum operating parameters: maximum charge rate (continuous, 1×hour); maximum fabric filter...
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we surveyed the development of the maximum-entropy clustering algorithm, pointed out that the maximum-entropy clustering algorithm is not new in essence, and constructed two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may not converge to a local minimum of its objective function, but to a saddle point. Based on these results, our paper shows that the convergence theorem for the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general cases.
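The iteration in question alternates soft (Gibbs) memberships with membership-weighted mean updates, as in deterministic annealing. One step might look as follows; per the abstract, repeating it can stall at a saddle point of the objective rather than a local minimum:

```python
import numpy as np

def maxent_cluster_step(X, centers, beta):
    """One maximum-entropy clustering iteration:
    memberships p(j|x) proportional to exp(-beta * ||x - c_j||^2),
    then each center moves to the membership-weighted mean.
    beta plays the role of an inverse temperature."""
    # n x k matrix of squared distances from points to centers
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    new_centers = (p.T @ X) / p.sum(axis=0)[:, None]
    return p, new_centers
```

At large beta the memberships harden and the step behaves like a k-means update; a fixed point of this map need only be a stationary point, which is exactly where the counterexamples of the paper live.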
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
The periodicity of Grand Solar Minimum
Velasco Herrera, Victor Manuel
2016-07-01
The sunspot number is the most used index to quantify the solar activity. Nevertheless, the sunspot is a synthetic index and not a physical index. Therefore, we should be careful to use the sunspot number to quantify the low (high) solar activity. One of the major problems of using sunspot to quantify solar activity is that its minimum value is zero. This zero value hinders the reconstruction of the solar cycle during the Maunder minimum. All solar indexes can be used as analog signals, which can be easily converted into digital signals. In contrast, the conversion of a digital signal into an analog signal is not in general a simple task. The sunspot number during the Maunder minimum can be studied as a digital signal of the solar activity. In 1894, Maunder published a discovery that has maintained the Solar Physics in an impasse. In his famous work on "A Prolonged Sunspot Minimum" Maunder wrote: "The sequence of maximum and minimum has, in fact, been unfailing during the present century [..] and yet there [..], the ordinary solar cycle was once interrupted, and one long period of almost unbroken quiescence prevailed". The search for new historical Grand solar minima has been one of the most important questions in Solar Physics. However, the possibility of estimating a new Grand solar minimum is even more valuable. Since solar activity is the result of electromagnetic processes, we propose to employ the power to quantify solar activity: this is a fundamental physics concept in electrodynamics. Total Solar Irradiance is the primary energy source of the Earth's climate system and therefore its variations can contribute to natural climate change. In this work, we propose to consider the fluctuations in the power of the Total Solar Irradiance as a physical measure of the energy released by the solar dynamo, which contributes to understanding the nature of "profound solar magnetic field in calm". Using a new reconstruction of the Total Solar Irradiance we found the
Minimum signals in classical physics
邓文基; 许基桓; 刘平
2003-01-01
The bandwidth theorem for Fourier analysis on any time-dependent classical signal is shown using the operator approach to quantum mechanics. Following discussions about squeezed states in quantum optics, the problem of minimum signals presented by a single quantity and its squeezing is proposed. It is generally proved that all such minimum signals, squeezed or not, must be real Gaussian functions of time.
On the Minimum Induced Drag of Wings
Bowers, Albion H.
2011-01-01
Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load at which the aircraft is flying. The tools we use to calculate and predict induced drag were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, Prandtl and his students had optimized the problem to solve the minimum induced drag for a wing of a given span, formalized and written about in 1920. This solution is quoted in textbooks extensively today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and a solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb.
Defining a Minimum End Mill Diameter
A. E. Dreval'
2015-01-01
Industrial observations show that standard mill designs in many cases do not cover the full diversity of manufacturing operations, and many enterprises are forced to design and manufacture special (original) tool designs. An information search revealed a lack of end mill diameter calculations in publications. It has been proposed to calculate the end mill diameter either by empirical formulas [2, 3], or by selection from tables [4]. To estimate the minimum diameter of an end mill able to perform the specified manufacturing operations, formulas based on the strength of the mill body are obtained. The initial data for the calculation are the flow sheet of the milling operation and the properties of the processed and tool materials. The end mill is regarded as a cantilevered beam of circular cross section with diameter Dс (mill core diameter) and overhang Lв from a rigid fixing, loaded by the maximum bending force and torque. In deriving the formulas, the following well-reasoned assumptions, based on analysis of the sizes of the structural elements of standard mills, were used: the mill core diameter depends linearly on the mill diameter and the overhang; the ratio 4τ²/(σ² + 4τ²) is constant and equal to 0.065 for contour milling and 0.17 for slot milling. [The closed-form minimum-diameter expressions for contour milling (constant 1.121) and slot milling (constant 1.207) are garbled in this extract.] The obtained dependences, which allow defining a minimum end mill diameter that ensures its strength, can be used to design mills for contour milling with radius transition sections, holes of different diameters in body parts, and other cases when processing with a single mill is preferable. Using the proposed dependencies to calculate a feed of the maximum tolerable strength is reasonable in designing mills for slots.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
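For the single-tone case, the ML frequency estimate coincides with the periodogram maximizer, whose cost surface is exactly the multimodal function referred to above. A coarse grid search on a zero-padded FFT is the standard baseline (shown here as an illustration, not the paper's semidefinite relaxation):

```python
import numpy as np

def periodogram_freq(x, oversample=8):
    """Coarse ML frequency estimate for a single complex tone in white
    Gaussian noise: locate the periodogram peak on a zero-padded FFT
    grid. Grid search sidesteps the multimodality that defeats naive
    local optimization of the ML cost."""
    n = len(x)
    spec = np.abs(np.fft.fft(x, n * oversample))   # zero-pad to refine the grid
    k = int(np.argmax(spec))
    return k / (n * oversample)                    # cycles/sample, in [0, 1)
```

The estimate is quantized to the oversampled grid; in practice a local refinement (e.g., interpolation around the peak) follows, which is where a convex reformulation can replace the ad hoc steps.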
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T-1(I -A ) , where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Split-plot fractional designs: Is minimum aberration enough?
Kulahci, Murat; Ramirez, Jose; Tobias, Randy
2006-01-01
Split-plot experiments are commonly used in industry for product and process improvement. Recent articles on designing split-plot experiments concentrate on minimum aberration as the design criterion. Minimum aberration has been criticized as a design criterion for completely randomized fractional factorial designs, and alternative criteria, such as the maximum number of clear two-factor interactions, are suggested (Wu and Hamada (2000)). The need for alternatives to minimum aberration is even more acute for split-plot designs. In a standard split-plot design, there are several types of two-factor interactions … for completely randomized designs. Consequently, we provide a modified version of the maximum number of clear two-factor interactions design criterion to be used for split-plot designs.
Cook, Philip
2013-01-01
A minimum voting age is defended as the most effective and least disrespectful means of ensuring all members of an electorate are sufficiently competent to vote. Whilst it may be reasonable to require competency from voters, a minimum voting age should be rejected because its view of competence is unreasonably controversial, it is incapable of defining a clear threshold of sufficiency, and an alternative test is available which treats children more respectfully. This alternative is a procedura…
ANALYSIS AND ESTIMATION OF THE TRIE MINIMUM LEVEL IN NON-HASH DEDUPLICATION SYSTEM
M. A. Zhukov
2015-05-01
Subject of research. The paper deals with a method of restricting the minimum level of the trie in a non-hash data deduplication system. Method. The essence of the method is to forcibly complete the trie to a specified minimum level. The proposed method makes it possible to increase the performance of the process by reducing the number of collisions at the lower levels of the trie. The maximum theoretical performance growth corresponds to the share of collisions in the total number of data read operations from the storage medium. Applying the method increases the metadata size by the amount of the new structures containing one element. Main results. The results were confirmed by a computational experiment with non-hash deduplication on a 528 GB data set. Analysis of the process showed that 99% of the execution time is spent on head positioning of the hard drives, caused by the random distribution of the blocks on the storage medium. On the experimental data set, applying the minimum-level restriction to the trie increases performance by up to 16%, with a 49% increase in metadata size. The total amount of metadata is 34% less than with hash-based deduplication using the MD5 algorithm, and 17% less than using the Tiger192 algorithm. These results confirm the effectiveness of the proposed method. Practical relevance. The proposed method increases the performance of the deduplication process by reducing the number of collisions during trie construction. The results are of practical importance for professionals involved in the development of non-hash data deduplication methods.
Toward better application of minimum area requirements in conservation planning
Pe’er, G.; Tsianou, M.A.; Franz, K.W.; Matsinos, Y.G.; Mazaris, A.D.; Storch, D.; Kopsova, L.; Verboom, J.; Baguette, M.; Stevens, V.M.; Henle, K.
2014-01-01
The Minimum Area Requirements (MAR) of species is a concept that explicitly addresses area and therefore can be highly relevant for conservation planning and policy. This study compiled a comprehensive database of MAR estimates from the literature, covering 216 terrestrial animal species from 80
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
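The reported prediction equation, Xmax − X0 = (0.59 ± 0.02)·(Y_X/P)·C, is simple enough to sketch directly. In this minimal sketch, the numeric inputs (inoculum, yield, and MIC values) are made-up illustrative numbers, not the study's data:

```python
# Sketch of the reported relation Xmax - X0 = (0.59 +/- 0.02) * Y_X/P * C.
# All numeric inputs below are hypothetical illustration values.
def predict_max_biomass(x0, y_xp, mic_lactate, k=0.59):
    """Predicted maximum biomass Xmax from inoculum X0, biomass yield per
    unit of lactate produced (Y_X/P), and the MIC of lactate (C)."""
    return x0 + k * y_xp * mic_lactate

xmax = predict_max_biomass(x0=0.1, y_xp=0.05, mic_lactate=200.0)
print(round(xmax, 2))  # -> 6.0
```

The point of the equation is that once a strain's lactate MIC and biomass yield per unit lactate are measured, the attainable biomass in a pH-controlled culture is fixed, regardless of further feeding.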
Strong Solar Control of Infrared Aurora on Jupiter: Correlation Since the Last Solar Maximum
Kostiuk, T.; Livengood, T. A.; Hewagama, T.
2009-01-01
Polar aurorae in Jupiter's atmosphere radiate throughout the electromagnetic spectrum from X ray through mid-infrared (mid-IR, 5 - 20 micron wavelength). Voyager IRIS data and ground-based spectroscopic measurements of Jupiter's northern mid-IR aurora, acquired since 1982, reveal a correlation between auroral brightness and solar activity that has not been observed in Jovian aurora at other wavelengths. Over nearly three solar cycles, Jupiter auroral ethane emission brightness and solar 10.7 cm radio flux and sunspot number are positively correlated with high confidence. Ethane line emission intensity varies over tenfold between low and high solar activity periods. Detailed measurements have been made using the GSFC HIPWAC spectrometer at the NASA IRTF since the last solar maximum, following the mid-IR emission through the declining phase toward solar minimum. An even more convincing correlation with solar activity is evident in these data. Current analyses of these results will be described, including planned measurements on polar ethane line emission scheduled through the rise of the next solar maximum beginning in 2009, with a steep gradient to a maximum in 2012. This work is relevant to the Juno mission and to the development of the Europa Jupiter System Mission. Results of observations at the Infrared Telescope Facility (IRTF) operated by the University of Hawaii under Cooperative Agreement no. NCC5-538 with the National Aeronautics and Space Administration, Science Mission Directorate, Planetary Astronomy Program. This work was supported by the NASA Planetary Astronomy Program.
Minimum Expenses, Maximum Savings: How to Live in China Smartly
2011-01-01
While it's at least very annoying, and at most woefully erroneous, that many Chinese people judge all foreigners to be totally minted, it's not hard to see why, when many foreigners are here living decadent lifestyles, partying on weekends (and weekdays), travelling all over the country and mincing around town with MacBooks, iPods and Ray-Bans. But then there are the secret "squirrelers" …
José Antonio Gutiérrez-Gallego
2015-01-01
This article describes the design of a traffic assignment model that predicts flows for each segment of an urban network with greater accuracy than the traditional four-step model, while also preserving trip origins and destinations. The research objectives are to determine traffic intensity in specific areas of the network and to identify trip origins and destinations in order to predict changes in urban mobility. To achieve these objectives, relational databases and a geographic information system for analysing the transport supply (GIS-T) are used. This framework is completed with data from household interviews and intercept surveys to identify mobility patterns in the medium-sized city of Mérida, Spain. These applications can detect changes in mobility patterns and locate problem areas. The results show a high degree of fit between predicted and observed trips. In addition, the levels of disaggregation in each midpoint section of the network, combined with the adjustment of population data by means of population pyramids, avoid biases in the trip samples.
Wage and Labor Standards Administration (DOL), Washington, DC.
This report describes the 1966 amendments to the Fair Labor Standards Act and summarizes the findings of three 1969 studies of the economic effects of these amendments. The studies found that economic growth continued through the third phase of the amendments, beginning February 1, 1969, despite increased wage and hours restrictions for recently…
The diversity-validity dilemma: In search of minimum adverse impact and maximum utility.
Callie Theron
2009-04-01
Selection from diverse groups of applicants poses the formidable challenge of developing valid selection procedures that simultaneously add value, do not discriminate unfairly and which minimise adverse impact. Valid selection procedures used in a fair, non-discriminatory manner that optimises utility, however, very often result in adverse impact against members of protected groups. More often than not, the assessment techniques used for selection are blamed for this. The conventional interpretation of adverse impact results in an erroneous diagnosis of the fundamental causes of the under-representation of protected group members and, consequently, in an inappropriate treatment of the problem.
"A minimum of urbanism and a maximum of ruralism": the Cuban experience.
Gugler, J
1980-01-01
The case of Cuba provides social scientists with reasonably good information on urbanization policies and their implementation in 1 developing country committed to socialism. The demographic context is considered, and Cuban efforts to eliminate the rural-urban contradiction and to redefine the role of Havana are described. The impact of these policies is analyzed in terms of available data on urbanization patterns since January 1959 when the revolutionaries marched into Havana. Prerevolutionary urbanization trends are considered. Fertility in Cuba has declined simultaneously with mortality and even more rapidly. Projections assume a 1.85% annual growth rate, resulting in a population of nearly 15 million by the year 2000. Any estimate regarding the future trend in population growth must depend on prognosis of general living conditions and of specific government policies regarding contraception, abortion, female labor force participation, and child care facilities. If population growth in Cuba has been substantial, but less dramatic than that of many other developing countries, urban growth presents a similar picture. Cuba's highest rate of growth of the population living in urban centers with a population over 20,000, in any intercensal period during the 20th century, was 4.1%/year for 1943-1953. It dropped to 3.0% in the 1953-1970 period. Government policies achieved a measure of success in stemming the tide of rural-urban migration, but the aims of the revolutionary leadership went further. The objective was for urban dwellers to be involved in agriculture, and the living standards of the rural population were to be raised to approximate those of city dwellers. The goal of "urbanizing" the countryside found expression in a program designed to construct new small towns which could more easily be provided with services. A slowdown in the growth of Havana, and the concomitant weakening of its dominant position, was intended by the revolutionary leadership. 
Official policies have been enunciated that connect the reduction in the dominance of Havana with the slowdown in urban growth and the urbanization of the countryside. Evidence is presented which suggests achievements along all of these dimensions, but by 1970 they were, as yet, quite limited.
ANALYTICAL ESTIMATION OF MINIMUM AND MAXIMUM TIME EXPENDITURES OF PASSENGERS AT AN URBAN ROUTE STOP
Gorbachov, P.
2013-01-01
This scientific paper deals with the problem of defining the average time spent by passengers waiting for transport vehicles at urban stops, and presents the results of analytical modeling of this value when the traffic schedule is unknown to the passengers, for two options of vehicle traffic management on the given route.
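For the schedule-unknown case the abstract describes, a standard renewal-theory sketch gives the waiting-time bounds: with headways H, a passenger arriving at random waits E[W] = E[H²]/(2·E[H]) on average, a minimum of zero, and at most the longest headway. This is a generic illustration of that textbook result, not the paper's own model:

```python
# Waiting time at a stop when the timetable is unknown to passengers.
# A random arrival lands in a headway with probability proportional to its
# length, hence the mean wait is sum(h^2) / (2 * sum(h)).
def waiting_time_stats(headways_min):
    total = sum(headways_min)
    mean_wait = sum(h * h for h in headways_min) / (2 * total)
    return mean_wait, 0.0, max(headways_min)

mean_w, min_w, max_w = waiting_time_stats([5, 5, 10])
print(mean_w, min_w, max_w)  # -> 3.75 0.0 10
```

Note that the mean wait (3.75 min) exceeds half the average headway (20/3 ≈ 6.67 min, half ≈ 3.33 min): irregular headways penalize randomly arriving passengers.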
Blasques, José Pedro Albergaria Amaral; Stolpe, Mathias
2011-01-01
and cross section geometry. The resulting finite element matrices are significantly smaller than those obtained using equivalent finite element models. This modeling approach is therefore an attractive alternative in computationally intensive applications at the conceptual design stage where the focus...
Gurung, Prabin
2015-01-01
The thesis was written in order to find workable ideas and techniques of ecotourism for sustainable development and to find out the importance of ecotourism. It illustrates how ecotourism can play a beneficial role for visitors and local people. The thesis was based on ecotourism and its impact; the case study covered Sauraha and Chitwan National Park. How can ecotourism be fruitful to local residents and nature, and what are the drawbacks of ecotourism? Ecotourism also has negative impacts on both th…
School violence: effective response protocols for maximum safety and minimum liability.
Miller, Laurence
2007-01-01
Despite the recent preoccupation with terrorism, most Americans are still killed by our own citizens, and school violence continues to be a significant source of mortality and trauma. This article describes the basic facts, features, and dynamics of school violence and presents a prevention, response, and recovery protocol adapted from the related field of workplace violence. This model may be used by educators, law enforcement professionals, and mental health clinicians in their collaborative efforts to make our academic institutions safer and healthier places to learn.
Minimum Q Electrically Small Antennas
Kim, O. S.
2012-01-01
Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions for the stored energies obtained through the vector spherical wave theory, it is shown that a magnetic-coated metal core reduces the internal stored energy of both TM1m and TE1m modes simultaneously, so that a self-resonant antenna with the Q approaching the fundamental minimum is created. Numerical results for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits a Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q.
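As a quick arithmetic check of the quoted figures, the textbook single-mode Chu bound Q_Chu = 1/(ka)³ + 1/(ka) can be evaluated at ka = 0.254. Using that classic formula is an assumption here; the paper's exact bound for combined TM/TE modes may differ:

```python
# Evaluate the classic Chu lower bound and the quoted 0.66x figure at ka = 0.254.
ka = 0.254
q_chu = 1 / ka**3 + 1 / ka   # textbook single-mode Chu bound (assumed form)
q_antenna = 0.66 * q_chu     # the 4-arm helix result quoted in the abstract
print(round(q_chu, 1), round(q_antenna, 1))  # -> 65.0 42.9
```

Since 0.66× the Chu bound is stated to be 1.25× the minimum Q, the implied minimum for the combined-mode antenna is about 0.53× the single-mode Chu bound, consistent with dual-mode antennas roughly halving the single-mode limit.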
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: ux′ (or Δt ux) = k(ux−1 − 2ux + ux+1) + f(ux), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
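The lattice equation and the Nagumo example can be simulated directly. This is a minimal sketch, assuming an explicit discrete-time scheme with the bistable nonlinearity f(u) = u(1−u)(u−a) and a time step small enough that solutions stay in the invariant interval [0, 1], as the weak maximum principle predicts; the parameter values are illustrative, not from the paper:

```python
# Discrete-time Nagumo equation on a finite lattice with fixed boundary values:
# u_x(t+dt) = u_x + dt * ( k*(u_{x-1} - 2*u_x + u_{x+1}) + f(u_x) ),
# with bistable nonlinearity f(u) = u*(1-u)*(u-a).
def step(u, k=0.1, dt=0.1, a=0.3):
    new = u[:]                      # endpoints kept fixed (boundary data)
    for x in range(1, len(u) - 1):
        f = u[x] * (1 - u[x]) * (u[x] - a)
        new[x] = u[x] + dt * (k * (u[x - 1] - 2 * u[x] + u[x + 1]) + f)
    return new

u = [0.0] * 10 + [1.0] * 10        # a front between the two stable states
for _ in range(200):
    u = step(u)

# Weak maximum/minimum principle: values never leave [0, 1].
assert all(0.0 <= v <= 1.0 for v in u)
```

With a larger dt the update is no longer monotone in u_x and the bounds can be violated, which mirrors the paper's point that the discrete-time maximum principle depends on the time step.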
Increasing the weight of minimum spanning trees
Frederickson, G.N.; Solis-Oba, R. [Purdue Univ., West Lafayette, IN (United States)
1996-12-31
Given an undirected connected graph G and a cost function for increasing edge weights, the problem of determining the maximum increase in the weight of the minimum spanning trees of G subject to a budget constraint is investigated. Two versions of the problem are considered. In the first, each edge has a cost function that is linear in the weight increase. An algorithm is presented that solves this problem in strongly polynomial time. In the second version, the edge weights are fixed but an edge can be removed from G at a unit cost. This version is shown to be NP-hard. An Ω(1/log k)-approximation algorithm is presented for it, where k is the number of edges to be removed.
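The first (linear-cost) version of the problem can be illustrated on a tiny graph by brute force. This sketch is only an illustration of the problem statement, not the paper's strongly polynomial algorithm; it enumerates integer weight-increase allocations under a budget and recomputes the MST weight with Kruskal's algorithm:

```python
# Budgeted MST weight-increase problem, brute-forced on a triangle graph.
import itertools

def mst_weight(n, edges):
    # Kruskal's algorithm with a union-find over vertices 0..n-1.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    total = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

# Triangle: MST = {(1, 0-1), (2, 1-2)} with weight 3; edge (0, 2) weighs 4.
edges = [(1, 0, 1), (2, 1, 2), (4, 0, 2)]
base = mst_weight(3, edges)

# Spend a budget B of unit-cost weight increases across the edges.
B = 3
best = 0
for alloc in itertools.product(range(B + 1), repeat=len(edges)):
    if sum(alloc) > B:
        continue
    bumped = [(w + a, u, v) for (w, u, v), a in zip(edges, alloc)]
    best = max(best, mst_weight(3, bumped))

print(base, best)  # -> 3 6
```

Note the adversary does best spreading the budget over the cheap MST edges (raising them toward the weight of the bypass edge), rather than dumping it on one edge that the MST can then avoid.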
A transcribed emergency record at minimum cost.
Klimt, C R; Becker, S; Fox, B S; Ensminger, F
1983-09-01
We have developed a new method of implementing a transcribed emergency record at minimum cost. Dictated emergency records are typed immediately by a transcriber located in the emergency department. This member of the medical record transcriber pool is given other non-urgent medical record material to type when there are no emergency records to type. The costs are reduced to the same level as routine medical records transcription. In 1982, 19,892 of the total 28,000 emergency records were transcribed by adding only 1.35 full-time equivalents (FTEs) to the transcriber pool. The remaining charts were handwritten because insufficient funds had been allocated to type all emergency records. The transcriber is capable of typing a maximum of 64 charts, averaging 13 lines (156 words) each, per 8-hour shift. The service can be phased in gradually as funds for transcribing the emergency record are allocated to the central transcriber pool.
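The staffing figures above admit a quick back-of-the-envelope check. The working-days-per-year value below is an assumption introduced for the illustration, not a number from the article:

```python
# Capacity check: can 1.35 added FTEs transcribe ~19,892 charts per year?
CHARTS_PER_SHIFT = 64      # stated maximum per transcriber per 8-hour shift
FTE_ADDED = 1.35
WORKDAYS_PER_YEAR = 250    # assumed, not from the article

annual_capacity = CHARTS_PER_SHIFT * WORKDAYS_PER_YEAR * FTE_ADDED
transcribed = 19_892
print(annual_capacity, annual_capacity >= transcribed)  # -> 21600.0 True
```

Under that assumption the added transcriber capacity (about 21,600 charts/year) comfortably covers the 19,892 charts reported transcribed, consistent with the article's claim that only 1.35 FTEs were needed.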
Why relevance theory is relevant for lexicography
Bothma, Theo; Tarp, Sven
2014-01-01
… socio-cognitive and affective relevance. It then shows, at the hand of examples, why relevance is important from a user perspective in the extra-lexicographical pre- and post-consultation phases and in the intra-lexicographical consultation phase. It defines an additional type of subjective relevance that is very important for lexicography as well as for information science, viz. functional relevance. Since all lexicographic work is ultimately aimed at satisfying users' information needs, the article then discusses why the lexicographer should take note of all these types of relevance when planning a new dictionary project, identifying new tasks and responsibilities of the modern lexicographer. The article furthermore discusses how relevance theory impacts on teaching dictionary culture and reference skills. By integrating insights from lexicography and information science, the article contributes to new …
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/AC-coefficient-level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is given as an upper bound for the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of space is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Microbial oceanography of anoxic oxygen minimum zones
Ulloa, Osvaldo; Canfield, Donald E; DeLong, Edward F
2012-01-01
Vast expanses of oxygen-deficient and nitrite-rich water define the major oxygen minimum zones (OMZs) of the global ocean. They support diverse microbial communities that influence the nitrogen economy of the oceans, contributing to major losses of fixed nitrogen as dinitrogen (N2) and nitrous oxide (N2O). … Environmental genomics and geochemical studies show the presence of other relevant processes, particularly those associated with the sulfur and carbon cycles. AMZs correspond to an intermediate state between two "end points" represented by fully oxic systems and fully sulfidic systems. Modern and ancient AMZs and sulfidic basins are chemically and functionally related. Global change is affecting the magnitude of biogeochemical fluxes and ocean chemical inventories, leading to shifts in AMZ chemistry and biology that are likely to continue well into the future.
Minimum Thermal Conductivity of Superlattices
Simkin, M. V.; Mahan, G. D.
2000-01-31
The phonon thermal conductivity of a multilayer is calculated for transport perpendicular to the layers. There is a crossover from particle transport for thick layers to wave transport for thin layers. The calculations show that the conductivity has a minimum value for a layer thickness somewhat smaller than the mean free path of the phonons. (c) 2000 The American Physical Society.
Minimum landing size for bream (Abramis brama)
Hal, van R.; Miller, D.C.M.
2016-01-01
To support a decision on a minimum landing size for bream, primarily for the IJsselmeer and Markermeer, the Dutch Ministry of Economic Affairs asked IMARES to provide an overview of landing sizes for bream in other countries and, where possible, the motivation behind these …
Coupling between minimum scattering antennas
Andersen, J.; Lessow, H; Schjær-Jacobsen, Hans
1974-01-01
Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...
Alderson, Tim L.; Svenja Huntemann
2013-01-01
Singleton-type upper bounds on the minimum Lee distance of general (not necessarily linear) Lee codes over ℤq are discussed. Two bounds known for linear codes are shown to also hold in the general case, and several new bounds are established. Codes meeting these bounds are investigated and in some cases characterised.
Minimum airflow reset of single-duct VAV terminal boxes
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and
Understanding the Minimum Wage: Issues and Answers.
Employment Policies Inst. Foundation, Washington, DC.
This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…
2010-01-01
Title 5, Administrative Personnel; 2010-01-01. FAIR LABOR STANDARDS ACT, Minimum Wage Provisions, Basic Provision, § 551.301 Minimum wage: (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Quantum mechanics the theoretical minimum
Susskind, Leonard
2014-01-01
From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman's crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.
Kwee, R E; The ATLAS collaboration
2010-01-01
Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp-collisions to perform first measurements of charged particle densities. These measurements will help to constrain various models describing phenomenologically soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.09 < |eta| < 3.8 has been proven to select pp-collisions very efficiently, the Inner Detector based minimum bias trigger uses a random seed on filled bunches and central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies on possible bias sources will be presen...
Minimum thickness anterior porcelain restorations.
Radz, Gary M
2011-04-01
Porcelain laminate veneers (PLVs) provide the dentist and the patient with an opportunity to enhance the patient's smile in a minimally to virtually noninvasive manner. Today's PLV demonstrates excellent clinical performance and as materials and techniques have evolved, the PLV has become one of the most predictable, most esthetic, and least invasive modalities of treatment. This article explores the latest porcelain materials and their use in minimum thickness restoration.
Fingerprinting with Minimum Distance Decoding
Lin, Shih-Chun; Gamal, Hesham El
2007-01-01
This work adopts an information theoretic framework for the design of collusion-resistant coding/decoding schemes for digital fingerprinting. More specifically, the minimum distance decision rule is used to identify 1 out of t pirates. Achievable rates, under this detection rule, are characterized in two distinct scenarios. First, we consider the averaging attack, where a random coding argument is used to show that the rate 1/2 is achievable with t = 2 pirates. Our study is then extended to the general case of arbitrary t, highlighting the underlying complexity-performance tradeoff. Overall, these results establish the significant performance gains offered by minimum distance decoding as compared to other approaches based on orthogonal codes and correlation detectors. In the second scenario, we characterize the achievable rates, with minimum distance decoding, under any collusion attack that satisfies the marking assumption. For t = 2 pirates, we show that the rate 1 − H(0.25) ≈ 0.188 is achievable using an ...
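The quoted rate for t = 2 pirates under the marking assumption can be checked directly by evaluating the binary entropy H(0.25). A minimal stdlib sketch (the coding/decoding scheme itself is not reproduced here):

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Achievable fingerprinting rate for t = 2 pirates, as quoted in the
# abstract: 1 - H(0.25).
rate = 1 - binary_entropy(0.25)
print(round(rate, 3))  # 0.189 (the abstract truncates this to 0.188)
```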
Minimum feature size preserving decompositions
Aloupis, Greg; Demaine, Martin L; Dujmovic, Vida; Iacono, John
2009-01-01
The minimum feature size of a crossing-free straight line drawing is the minimum distance between a vertex and a non-incident edge. This quantity measures the resolution needed to display a figure or the tool size needed to mill the figure. The spread is the ratio of the diameter to the minimum feature size. While many algorithms (particularly in meshing) depend on the spread of the input, none explicitly consider finding a mesh whose spread is similar to the input. When a polygon is partitioned into smaller regions, such as triangles or quadrangles, the degradation is the ratio of original to final spread (the final spread is always greater). Here we present an algorithm to quadrangulate a simple n-gon, while achieving constant degradation. Note that although all faces have a quadrangular shape, the number of edges bounding each face may be larger. This method uses Θ(n) Steiner points and produces Θ(n) quadrangles. In fact, to obtain constant degradation, Ω(n) Steiner points are required by any al...
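The quantity defined in the first sentence can be computed directly. A brute-force sketch of the definition (not the paper's quadrangulation algorithm), assuming the polygon is given as a vertex list:

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp the projection onto the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def minimum_feature_size(polygon):
    """Minimum distance between a vertex and a non-incident edge of a
    simple polygon given as a list of (x, y) vertices."""
    n = len(polygon)
    best = math.inf
    for i, p in enumerate(polygon):
        for j in range(n):
            k = (j + 1) % n
            if i in (j, k):  # skip the two edges incident to vertex i
                continue
            best = min(best, point_segment_distance(p, polygon[j], polygon[k]))
    return best

# Unit square: each vertex is at distance 1 from both non-incident edges.
print(minimum_feature_size([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
```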
A discussion on maximum entropy production and information theory
Bruers, Stijn [Instituut voor Theoretische Fysica, Celestijnenlaan 200D, Katholieke Universiteit Leuven, B-3001 Leuven (Belgium)
2007-07-06
We will discuss the maximum entropy production (MaxEP) principle based on Jaynes' information theoretical arguments, as was done by Dewar (2003 J. Phys. A: Math. Gen. 36 631-41, 2005 J. Phys. A: Math. Gen. 38 371-81). With the help of a simple mathematical model of a non-equilibrium system, we will show how to derive minimum and maximum entropy production. Furthermore, the model will help us to clarify some confusing points and to see differences between some MaxEP studies in the literature.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
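The core idea, writing the likelihood directly from Poisson probabilities rather than a chi-squared approximation, can be sketched for the simplest case of a single unknown rate (the counts and grid below are hypothetical, and the paper's nonlinear spectral models are far richer):

```python
import math

def poisson_loglik(lam, counts):
    """Poisson log-likelihood of rate lam (up to the constant sum of log k!)."""
    return sum(k * math.log(lam) - lam for k in counts)

# Toy spectrum: counts in five detector channels, one common rate.
counts = [3, 5, 4, 6, 2]

# For a constant-rate model the ML estimate is the sample mean; here we
# confirm it numerically with a coarse grid search (illustrative only).
grid = [0.1 * i for i in range(1, 101)]
lam_hat = max(grid, key=lambda lam: poisson_loglik(lam, counts))
print(lam_hat)  # 4.0, the sample mean
```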
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system with respect to the single-motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Applicability of the minimum entropy generation method for optimizing thermodynamic cycles
Cheng Xue-Tao; Liang Xin-Gang
2013-01-01
Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed.
Ceramic veneers with minimum preparation.
da Cunha, Leonardo Fernandes; Reis, Rachelle; Santana, Lino; Romanini, Jose Carlos; Carvalho, Ricardo Marins; Furuse, Adilson Yoshio
2013-10-01
The aim of this article is to describe the possibility of improving dental esthetics with low-thickness glass ceramics without major tooth preparation for patients with small to moderate anterior dental wear and little discoloration. For this purpose, a carefully defined treatment planning and a good communication between the clinician and the dental technician helped to maximize enamel preservation, and offered a good treatment option. Moreover, besides restoring esthetics, the restorative treatment also improved the function of the anterior guidance. It can be concluded that the conservative use of minimum thickness ceramic laminate veneers may provide satisfactory esthetic outcomes while preserving the dental structure.
A minimum income for healthy living.
Morris, J N; Donkin, A J; Wonderling, D; Wilkinson, P; Dowler, E A
2000-12-01
Half a century of research has provided consensual evidence of major personal requisites of adult health in nutrition, physical activity and psychosocial relations. Their minimal money costs, together with those of a home and other basic necessities, indicate disposable income that is now essential for health. In a first application we identified such representative minimal costs for healthy, single, working men aged 18-30, in the UK. Costs were derived from ad hoc survey, relevant figures in the national Family Expenditure Survey, and by pragmatic decision for the few minor items where survey data were not available. Minimum costs were assessed at £131.86 per week (UK April 1999 prices). Component costs, especially those of housing (which represents around 40% of this total), depend on region and on several assumptions. By varying these, a range of totals from £106.47 to £163.86 per week was detailed. These figures compare, in 1999, with the new UK national minimum wage, after statutory deductions, of £105.84 at 18-21 years and £121.12 at 22+ years for a 38 hour working week. Corresponding basic social security rates are £40.70 to £51.40 per week. Accumulating science means that absolute standards of living, "poverty", minimal official incomes and the like, can now be assessed by objective measurement of the personal capacity to meet the costs of major requisites of healthy living. A realistic assessment of these costs is presented as an impetus to public discussion. It is a historical role of public health as social medicine to lead in public advocacy of such a national agenda.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find the series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
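For orientation, the numerical value of the maximum-force limit F_max = c^4/4G can be evaluated directly from standard constants (a quick check, not part of the paper):

```python
# Numerical value of the conjectured maximum force F_max = c^4 / (4G),
# using standard CODATA constants.
c = 2.99792458e8   # speed of light, m/s (exact by definition)
G = 6.67430e-11    # gravitational constant, m^3 kg^-1 s^-2

F_max = c**4 / (4 * G)
print(f"{F_max:.3e} N")  # 3.026e+43 N
```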
Lioma, Christina; Larsen, Birger; Petersen, Casper
2016-01-01
What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question. Given a query, we train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all.
How does the minimum detectable displacement in a GNSS monitoring network change?
Hilmi Erkoç, Muharrem; Doǧan, Uǧur; Aydın, Cüneyt
2016-04-01
The minimum detectable displacement in a geodetic monitoring network is the displacement magnitude that can just be discriminated with known error probabilities. This displacement, originally deduced from sensitivity analysis, depends on network design, observation accuracy, the datum of the network, the direction of the displacement, and the power of the statistical test used for detecting displacements. One may investigate how different scenarios for network design and observation accuracy influence the minimum detectable displacements for a specified datum, a priori forecast directions, and an assumed power of the test, and decide which scenario is best. It is sometimes difficult to forecast the directions of the displacements. In that case, the minimum detectable displacements in a geodetic monitoring network are derived along the eigen-directions associated with the maximum eigenvalues of the network stations. This study investigates how minimum detectable displacements in a GNSS monitoring network change depending on the accuracies of the network stations. For this, the CORS-TR network in Turkey with 15 stations (one station fixed) is used. Data with 4 h, 6 h, 12 h and 24 h observing session durations on three sequential days of 2011, 2012 and 2013 were analyzed with Bernese 5.2 GNSS software. The repeatabilities of the daily solutions for each year were analyzed carefully to scale the Bernese cofactor matrices properly. The root mean square (RMS) values of daily repeatability with respect to the combined 3-day solution were computed (the RMS values are generally less than 2 mm in the horizontal directions (north and east) and less than 5 mm in the vertical direction for the 24 h observing session duration). With the cofactor matrices obtained for these observing sessions, the minimum detectable displacements along the (maximum) eigen-directions are compared with each other. According to these comparisons, the longer the observing session, the smaller the minimum detectable displacement.
Kurz-Besson, Cathy B; Lousada, José L; Gaspar, Maria J; Correia, Isabel E; David, Teresa S; Soares, Pedro M M; Cardoso, Rita M; Russo, Ana; Varino, Filipa; Mériaux, Catherine; Trigo, Ricardo M; Gouveia, Célia M
2016-01-01
Western Iberia has recently shown increasing frequency of drought conditions coupled with heatwave events, leading to exacerbated limiting climatic conditions for plant growth. It is not clear to what extent wood growth and density of agroforestry species have suffered from such changes or recent extreme climate events. To address this question, tree-ring width and density chronologies were built for a Pinus pinaster stand in southern Portugal and correlated with climate variables, including the minimum, mean and maximum temperatures and the number of cold days. Monthly and maximum daily precipitation were also analyzed, as well as dry spells. The drought effect was assessed using the standardized precipitation-evapotranspiration index (SPEI), a multi-scalar drought index, at time scales between 1 and 24 months. The climate-growth/density relationships were evaluated for the period 1958-2011. We show that both wood radial growth and density highly benefit from the strong decay of cold days and the increase of minimum temperature. Yet the benefits are hindered by long-term water deficit, which results in different levels of impact on wood radial growth and density. Despite the intensification of long-term water deficit, tree-ring width appears to benefit from the minimum temperature increase, whereas the effects of long-term droughts significantly prevail on tree-ring density. Our results further highlight the dependency of the species on deep water sources after the juvenile stage. The impact of climate changes on long-term droughts and their repercussion on the shallow groundwater table and P. pinaster's vulnerability are also discussed. This work provides relevant information for forest management in the semi-arid area of the Alentejo region of Portugal. It should ease the elaboration of mitigation strategies to assure P. pinaster's production capacity and quality in response to more arid conditions in the near future in the region.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
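The generic MAXENT machinery the abstract relies on, maximizing entropy subject to a constraint, can be sketched for a discrete state space with a prescribed mean (a toy sketch only; the paper's constraints are on volume fractions of elementary constituents, not this hypothetical mean constraint):

```python
import math

def maxent_distribution(values, mean_target, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum-entropy distribution over discrete states subject to a
    prescribed mean; the solution is the exponential (Gibbs-like) family,
    with the multiplier found by bisection."""
    def mean_for(beta):
        w = [math.exp(-beta * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    # mean_for(beta) is monotone decreasing in beta: bisect for the root.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mean_target:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Three states with values 0, 1, 2 and target mean 0.5.
p = maxent_distribution([0.0, 1.0, 2.0], 0.5)
print([round(x, 3) for x in p])  # [0.616, 0.268, 0.116]
```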
Asymmetric k-Center with Minimum Coverage
Gørtz, Inge Li
2008-01-01
In this paper we give approximation algorithms and inapproximability results for various asymmetric k-center with minimum coverage problems. In the k-center with minimum coverage problem, each center is required to serve a minimum number of clients. These problems have been studied by Lim et al. [A. Lim, B. Rodrigues, F. Wang, Z. Xu, k-center problems with minimum coverage, Theoret. Comput. Sci. 332 (1–3) (2005) 1–17] in the symmetric setting.
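To fix ideas on the underlying k-center objective, here is the classic Gonzalez greedy 2-approximation for the plain symmetric k-center problem; note this is background only, not the paper's algorithm, which treats the harder asymmetric variant with a minimum-coverage requirement per center:

```python
import math

def gonzalez_k_center(points, k):
    """Greedy 2-approximation for symmetric k-center: repeatedly add the
    point farthest from its nearest chosen center."""
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(math.dist(p, c) for c in centers))
        centers.append(far)
    # Covering radius: worst-case distance of any point to its center.
    radius = max(min(math.dist(p, c) for c in centers) for p in points)
    return centers, radius

pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
centers, r = gonzalez_k_center(pts, 2)
print(sorted(centers), r)  # two well-separated clusters, radius 1.0
```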
Minimum Delay Moving Object Detection
Lao, Dong
2017-01-08
We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., the fewest frames after the object moves, subject to a constraint on false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
Minimum Competency Testing and the Handicapped.
Wildemuth, Barbara M.
This brief overview of minimum competency testing and disabled high school students discusses: the inclusion or exclusion of handicapped students in minimum competency testing programs; approaches to accommodating the individual needs of handicapped students; and legal issues. Surveys of states that have mandated minimum competency tests indicate…
Do Some Workers Have Minimum Wage Careers?
Carrington, William J.; Fallick, Bruce C.
2001-01-01
Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…
Does the Minimum Wage Affect Welfare Caseloads?
Page, Marianne E.; Spetz, Joanne; Millar, Jane
2005-01-01
Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…
Minimum income protection in the Netherlands
van Peijpe, T.
2009-01-01
This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its effect
On the maximum entropy principle in non-extensive thermostatistics
Naudts, Jan
2004-01-01
It is possible to derive the maximum entropy principle from thermodynamic stability requirements. Using as a starting point the equilibrium probability distribution, currently used in non-extensive thermostatistics, it turns out that the relevant entropy function is Renyi's alpha-entropy, and not Tsallis' entropy.
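The Rényi alpha-entropy the abstract identifies as the relevant entropy function has a short closed form, with Shannon entropy recovered in the limit alpha → 1 (a standalone numerical illustration; the distribution below is hypothetical):

```python
import math

def renyi_entropy(p, alpha):
    """Rényi alpha-entropy of a discrete distribution (in nats);
    alpha = 1 is handled as the Shannon limit."""
    if alpha == 1.0:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** alpha for pi in p)) / (1 - alpha)

p = [0.5, 0.25, 0.25]
# As alpha -> 1 the Rényi entropy approaches the Shannon entropy.
print(round(renyi_entropy(p, 1.0), 4), round(renyi_entropy(p, 1.000001), 4))
```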
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for a 3-regular graph the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Fuzziness and Relevance Theory
Grace Qiao Zhang
2005-01-01
This paper investigates how the phenomenon of fuzzy language, such as `many' in `Mary has many friends', can be explained by Relevance Theory. It is concluded that fuzzy language use conforms with optimal relevance in that it can achieve the greatest positive effect with the least processing effort. It is the communicators themselves who decide whether or not optimal relevance is achieved, rather than the language form (fuzzy or non-fuzzy) used. People can skillfully adjust the deployment of different language forms or choose appropriate interpretations to suit different situations and communication needs. However, there are two challenges to RT: a. to extend its theory from individual relevance to group relevance; b. to embrace cultural considerations (because when relevance principles and cultural protocols are in conflict, the latter tends to prevail).
Perceptions of document relevance
Peter eBruza
2014-07-01
This article presents a study of how humans perceive the relevance of documents. Humans are adept at making reasonably robust and quick decisions about what information is relevant to them, despite the ever-increasing complexity and volume of their surrounding information environment. The literature on document relevance has identified various dimensions of relevance (e.g., topicality, novelty, etc.); however, little is understood about how these dimensions may interact. We performed a crowdsourced study of how human subjects judge two relevance dimensions in relation to document snippets retrieved from an internet search engine. The order of the judgement was controlled. For those judgements exhibiting an order effect, a q-test was performed to determine whether the order effects can be explained by a quantum decision model based on incompatible decision perspectives. Some evidence of incompatibility was found, which suggests incompatible decision perspectives are appropriate for explaining interacting dimensions of relevance.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given compact metric space. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
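The characterization described above can be stated compactly (a sketch in standard convex-analysis notation; the symbols $M$ and $\mathcal{P}(K)$ are labels introduced here, not taken from the note):

```latex
M(f) := \max_{x \in K} f(x), \qquad
\partial M(f) = \left\{ \mu \in \mathcal{P}(K) \;:\;
  \operatorname{supp}\mu \subseteq \operatorname*{arg\,max}_{x \in K} f(x) \right\},
```

where $K$ is the compact metric space and $\mathcal{P}(K)$ denotes the set of Borel probability measures on $K$.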
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. We suggest that, in the future, PSHA modelers either be brutally honest about the uncertainty of M estimates or find a way to decrease their influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED: whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step solves the optimization problem without considering the equal margin posteriors from the two views; the second step then imposes the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Kenneth W. K. Lui
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty in optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions, because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
Relevance Theory in Translation
Shao Jun; Jiang Min
2008-01-01
From the perspective of relevance theory, translation is regarded as communication. According to relevance theory, communication requires not only encoding, transfer and decoding processes, but also inference. As communication, translation decision-making is likewise based on human beings' inferential faculty. Drawing on relevance theory, this paper analyzes and explains some translation phenomena in two English versions of Cai Gen Tan - My Crude Philosophy of Life.
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Henning Grosse Ruse-Khan
2009-07-01
International intellectual property (IP) protection is at the heart of controversies over the impact of economic interests on social or environmental concerns. Some see IP rights as unduly encroaching upon human rights and societal interests; others argue for stronger enforcement and additional exclusivity to incentivize new innovations and creations. Underlying these debates is the perception that international IP treaties set out minimum standards of protection, which presumably allow for additional protection with only the sky being the limit. This article challenges this view and explores the idea of maximum standards, or ceilings, within the existing body of international IP law. It looks at the relation between IP treaties and subsequent agreements or national laws which offer stronger protection. In particular, within the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), an important qualification may serve as a door opener for ceilings: while additional IP protection may not go beyond mandatory limits within TRIPS, the qualification not to "contravene" TRIPS is unlikely to safeguard TRIPS flexibilities against TRIPS-plus norms. The article further identifies and examines the rationales for maximum standards in international IP protection: (1) legal security and predictability about the boundaries of protection; (2) the global protection of users' rights; and (3) the free movement of goods, services and information. Examples of mandatory limits in existing IP treaties and in ongoing initiatives show how these rationales can be implemented. However, most of the relevant treaty norms are optional. The article concludes with some observations on the need for more comprehensive and precise maximum standards.
Eick, Charles; Deutsch, Bill; Fuller, Jennifer; Scott, Fletcher
2008-01-01
Science teachers are always looking for ways to demonstrate the relevance of science to students. By connecting science learning to important societal issues, teachers can motivate students to both enjoy and engage in relevant science (Bennet, Lubben, and Hogarth 2007). To develop that connection, teachers can help students take an active role in…
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
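As a concrete illustration of the quantity being maximized (a sketch, not the paper's characterization), the Kirchhoff index can be computed from the Laplacian pseudoinverse via the identity $Kf(G) = n \cdot \mathrm{tr}(L^{+})$:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index: sum of resistance distances over all vertex pairs.

    Uses the identity Kf(G) = n * trace(L+), where L+ is the
    Moore-Penrose pseudoinverse of the graph Laplacian L = D - A.
    Assumes `adj` is the adjacency matrix of a connected graph.
    """
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return n * np.trace(np.linalg.pinv(laplacian))

# The 4-cycle C4 (a cactus with a single cycle): adjacent vertices have
# resistance distance 3/4, opposite vertices 1, so Kf = 4*(3/4) + 2*1 = 5.
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
```

For the 4-cycle above, `kirchhoff_index(c4)` evaluates to 5, matching the hand computation in the comment.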
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus ... on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Linear Minimum variance estimation fusion
ZHU Yunmin; LI Xianrong; ZHAO Juan
2004-01-01
This paper shows that general multisensor unbiased linearly weighted estimation fusion is essentially the linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending the Gauss-Markov estimation to the random parameter case of distributed estimation fusion in the LMV setting. In this setting, the fused estimator is a weighted sum of local estimates, obtained from a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to the above optimization problem, which depends only on the covariance matrix C_k. Third, if a priori information, namely the expectation and covariance of the estimated quantity, is unknown, a necessary and sufficient condition is presented for the above LMV fusion to become the best unbiased LMV estimation with known prior information. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of C_k for a class of multisensor linear systems with coupled measurement noises.
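As a minimal sketch of the idea (the scalar, independent-errors special case only, not the paper's general constrained matrix formulation), linear minimum variance fusion of unbiased local estimates reduces to inverse-variance weighting:

```python
def lmv_fuse_scalar(estimates, variances):
    """Fuse independent, unbiased scalar estimates by inverse-variance
    weighting: among linear unbiased combinations, this choice of
    weights minimizes the variance of the fused estimate."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    return fused, 1.0 / total  # fused estimate and its variance

# Two equally reliable sensors: the fusion is their average, and the
# fused variance is half the individual variance.
est, var = lmv_fuse_scalar([2.0, 4.0], [1.0, 1.0])
```

In the example, `est` is 3.0 and `var` is 0.5; with unequal variances the more reliable sensor receives proportionally more weight.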
Minimum Delay Moving Object Detection
Lao, Dong
2017-05-14
This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently from the "background" motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue; however, this incurs greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., the fewest frames after the object moves, while constraining false alarms, defined as declarations of detection before the object moves or incorrect or inaccurate segmentation at the detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
Dose variation during solar minimum
Gussenhoven, M.S.; Mullen, E.G.; Brautigam, D.H. (Phillips Lab., Geophysics Directorate, Hanscom Air Force Base, MA (US)); Holeman, E. (Boston Univ., MA (United States). Dept. of Physics)
1991-12-01
In this paper, the authors use direct measurement of dose to show the variation in inner and outer radiation belt populations at low altitude from 1984 to 1987. This period includes the recent solar minimum that occurred in September 1986. The dose is measured behind four thicknesses of aluminum shielding and for two thresholds of energy deposition, designated HILET and LOLET. The authors calculate an average dose per day for each month of satellite operation. The authors find that the average proton (HILET) dose per day (obtained primarily in the inner belt) increased systematically from 1984 to 1987, and has a high anticorrelation with sunspot number when offset by 13 months. The average LOLET dose per day behind the thinnest shielding is produced almost entirely by outer zone electrons and varies greatly over the period of interest. If any trend can be discerned over the 4 year period it is a decreasing one. For shielding of 1.55 g/cm^2 (227 mil) Al or more, the LOLET dose is complicated by contributions from > 100 MeV protons and bremsstrahlung.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
A new solar signal: Average maximum sunspot magnetic fields independent of activity cycle
Livingston, William
2016-01-01
Over the past five years, 2010-2015, we have observed, in the near infrared (IR), the maximum magnetic field strengths of 4145 sunspot umbrae. Herein we distinguish field strength from field flux (most solar magnetographs measure flux). The maximum field strength in an umbra is co-spatial with the position of minimum umbral brightness (Norton and Gilman, 2004). We measure field strength by the Zeeman splitting of the Fe 15648.5 A spectral line. We show that in the IR no cycle dependence of the average maximum field strength (2050 G) has been found, to within +/- 20 G. A similar analysis of 17,450 spots observed by the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory reveals the same cycle independence to within +/- 0.18 G, or a variance of 0.01%. This is found not to change over the ongoing 2010-2015 minimum-to-maximum cycle. We conclude that the average maximum umbral fields on the Sun are constant with time.
On-Off Minimum-Time Control With Limited Fuel Usage: Global Optima Via Linear Programming
DRIESSEN,BRIAN
1999-09-01
A method for finding a global optimum to the on-off minimum-time control problem with limited fuel usage is presented. Each control can take on only three possible values: maximum, zero, or minimum. The simplex method for linear systems naturally yields such a solution for the re-formulation presented herein because it always produces an extreme point solution to the linear program. Numerical examples for the benchmark linear flexible system are presented.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, in addition to minimizing the classification error and the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
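The regularizer above rests on the mutual information between classifier responses and labels. A minimal sketch of that quantity for discretized responses (an illustration of the measure itself, not the paper's entropy-estimation model or optimization):

```python
import numpy as np
from collections import Counter

def empirical_entropy(seq):
    """Shannon entropy (in bits) of the empirical distribution of seq."""
    counts = np.array(list(Counter(seq).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_information(responses, labels):
    """Empirical mutual information I(R;Y) = H(R) + H(Y) - H(R,Y)."""
    joint = list(zip(responses, labels))
    return (empirical_entropy(responses) + empirical_entropy(labels)
            - empirical_entropy(joint))

# Responses that perfectly predict a binary label carry 1 bit of
# information; responses independent of the label carry none.
```

Maximizing this quantity over the classifier parameters is what reduces the label uncertainty given the response, which is the intuition the paper formalizes.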
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Are There Long-Run Effects of the Minimum Wage?
Sorkin, Isaac
2015-04-01
An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.
Reforming the minimum wage: Toward a psychological perspective.
Smith, Laura
2015-09-01
The field of psychology has periodically used its professional and scholarly platform to encourage national policy reform that promotes the public interest. In this article, the movement to raise the federal minimum wage is presented as an issue meriting attention from the psychological profession. Psychological support for minimum wage reform derives from health disparities research that supports the causal linkages between poverty and diminished physical and emotional well-being. Furthermore, psychological scholarship relevant to the social exclusion of low-income people not only suggests additional benefits of financially inclusive policymaking, it also indicates some of the attitudinal barriers that could potentially hinder it. Although the national living wage debate obviously extends beyond psychological parameters, psychologists are well-positioned to evaluate and contribute to it. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Measurement of Minimum Bias Observables with the ATLAS detector
Kvita, Jiri; The ATLAS collaboration
2017-01-01
The modelling of Minimum Bias (MB) events is a crucial ingredient in the description of soft QCD processes. It also has significant relevance for the simulation of the environment at the LHC with many concurrent pp interactions ("pileup"). The ATLAS collaboration has provided new measurements of the inclusive charged particle multiplicity and its dependence on transverse momentum and pseudorapidity in special data sets with low LHC beam currents, recorded at centre-of-mass energies of 8 TeV and 13 TeV. The measurements cover a wide spectrum using charged particle selections with minimum transverse momentum of both 100 MeV and 500 MeV, and in various phase space regions of low and high charged particle multiplicities.
Criticisms of Relevance Theory
尚静; 孟晔; 焦丽芳
2006-01-01
This paper first briefly introduces the notion of Sperber and Wilson's Relevance Theory (RT), along with their motivation for putting it forward. Secondly, the paper gives some details of the methodology of RT, in which ostensive-inferential communication, context and optimal relevance are highlighted. Thirdly, the paper focuses on criticisms of RT from different areas of research on human language and communication. Finally, the paper draws a conclusion on the great importance of RT in pragmatics.
How Do Alternative Minimum Wage Variables Compare?
Sara Lemos
2005-01-01
Several minimum wage variables have been suggested in the literature. Such a variety of variables makes it difficult to compare the associated estimates across studies. One problem is that these estimates are not always calibrated to represent the effect of a 10% increase in the minimum wage. Another problem is that these estimates measure the effect of the minimum wage on the employment of different groups of workers. In this paper we critically compare employment effect estimates using five...
Minimum wages, globalization and poverty in Honduras
Gindling, T. H.; Terrell, Katherine
2008-01-01
To be competitive in the global economy, some argue that Latin American countries need to reduce or eliminate labour market regulations such as minimum wage legislation because they constrain job creation and hence increase poverty. On the other hand, minimum wage increases can have a direct positive impact on family income and may therefore help to reduce poverty. We take advantage of a complex minimum wage system in a poor country that has been exposed to the forces of globalization to test...
Tracking error with minimum guarantee constraints
Diana Barro; Elio Canestrelli
2008-01-01
In recent years the popularity of indexing has greatly increased in financial markets and many different families of products have been introduced. Often these products also have a minimum guarantee, in the form of a minimum rate of return at specified dates or a minimum level of wealth at the end of the horizon. Periods of declining stock market returns together with low interest rate levels on Treasury bonds make it more difficult to meet these liabilities. We formulate a dynamic asset alloca...
Effect of Pressure on Minimum Fluidization Velocity
Zhu Zhiping; Na Yongjie; Lu Qinggang
2007-01-01
Minimum fluidization velocities of quartz sand and glass beads under pressures of 0.5, 1.0, 1.5 and 2.0 MPa were investigated. The minimum fluidization velocity decreases with increasing pressure. The influence of pressure on the minimum fluidization velocity is stronger for larger particles than for smaller ones. Based on the test results and the Ergun equation, an empirical equation for the minimum fluidization velocity is proposed, and its predictions are comparable to other researchers' results.
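The reported trend (minimum fluidization velocity decreasing with pressure) follows from the Ergun-equation force balance, because higher pressure raises the gas density. A minimal sketch using the Wen & Yu correlation, a standard closure of that balance (an assumption here: the paper proposes its own empirical equation, which is not reproduced in the abstract, and the particle/gas numbers below are illustrative, not the paper's data):

```python
import math

def u_mf_wen_yu(d_p, rho_p, rho_g, mu, g=9.81):
    """Minimum fluidization velocity from the Wen & Yu correlation:
        Re_mf = sqrt(33.7^2 + 0.0408 * Ar) - 33.7
    with Archimedes number Ar = d_p^3 * rho_g * (rho_p - rho_g) * g / mu^2.
    All quantities in SI units.
    """
    ar = d_p ** 3 * rho_g * (rho_p - rho_g) * g / mu ** 2
    re_mf = math.sqrt(33.7 ** 2 + 0.0408 * ar) - 33.7
    return re_mf * mu / (rho_g * d_p)

# Illustrative numbers: 0.5 mm quartz sand (rho_p ~ 2650 kg/m^3) in air;
# raising the pressure from 0.5 MPa to 2.0 MPa roughly quadruples the
# gas density, lowering the predicted minimum fluidization velocity.
u_low_p = u_mf_wen_yu(d_p=0.5e-3, rho_p=2650.0, rho_g=6.0, mu=1.8e-5)
u_high_p = u_mf_wen_yu(d_p=0.5e-3, rho_p=2650.0, rho_g=24.0, mu=1.8e-5)
```

With these inputs `u_high_p < u_low_p`, reproducing the qualitative pressure dependence the abstract reports.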
7 CFR 35.11 - Minimum requirements.
2010-01-01
..., Denmark, East Germany, England, Finland, France, Greece, Hungary, Iceland, Ireland, Italy, Liechtenstein..., Switzerland, Wales, West Germany, Yugoslavia), or Greenland shall meet each applicable minimum requirement...
Microbial oceanography of anoxic oxygen minimum zones.
Ulloa, Osvaldo; Canfield, Donald E; DeLong, Edward F; Letelier, Ricardo M; Stewart, Frank J
2012-10-02
Vast expanses of oxygen-deficient and nitrite-rich water define the major oxygen minimum zones (OMZs) of the global ocean. They support diverse microbial communities that influence the nitrogen economy of the oceans, contributing to major losses of fixed nitrogen as dinitrogen (N(2)) and nitrous oxide (N(2)O) gases. Anaerobic microbial processes, including the two pathways of N(2) production, denitrification and anaerobic ammonium oxidation, are oxygen-sensitive, with some occurring only under strictly anoxic conditions. The detection limit of the usual method (Winkler titrations) for measuring dissolved oxygen in seawater, however, is much too high to distinguish low oxygen conditions from true anoxia. New analytical technologies are now revealing vanishingly low oxygen concentrations in nitrite-rich OMZs, indicating that these OMZs are essentially anoxic marine zones (AMZs). Autonomous monitoring platforms also reveal previously unrecognized episodic intrusions of oxygen into the AMZ core, which could periodically support aerobic metabolisms in a typically anoxic environment. Although nitrogen cycling is considered to dominate the microbial ecology and biogeochemistry of AMZs, recent environmental genomics and geochemical studies show the presence of other relevant processes, particularly those associated with the sulfur and carbon cycles. AMZs correspond to an intermediate state between two "end points" represented by fully oxic systems and fully sulfidic systems. Modern and ancient AMZs and sulfidic basins are chemically and functionally related. Global change is affecting the magnitude of biogeochemical fluxes and ocean chemical inventories, leading to shifts in AMZ chemistry and biology that are likely to continue well into the future.
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects; therefore, it is expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.
1975-06-25
conjugates of the roots of AH V. Thus the forward prediction error filter is a minimum phase filter. Since its output does not precede any of its input points ... circle. The inverse of the forward prediction error filter is also a causal minimum phase filter. The inverse filter can be used to construct the ... filter is a maximum phase filter (a minimum phase filter if the direction of time is reversed). When the maximum entropy assumption is valid, it
Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models
Rasmussen, Klaus Bolding
1994-01-01
The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
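For contrast with the randomized MCMC algorithm in the abstract above, a minimal deterministic baseline is the classical augmenting-path (Kuhn) algorithm for bipartite graphs. It runs in O(VE), far from the bounds the abstract discusses, but it makes the notion of a maximum cardinality matching concrete; the adjacency lists below are illustrative.

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching.
    adj[u] lists the right-side vertices adjacent to left vertex u.
    Returns the size of a maximum matching."""
    match_r = [-1] * n_right  # match_r[v] = left vertex matched to right v

    def try_augment(u, visited):
        # try to match u, possibly re-routing previously matched vertices
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                if match_r[v] == -1 or try_augment(match_r[v], visited):
                    match_r[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# path graph on 3 left + 3 right vertices: maximum matching has size 3
adj = [[0], [0, 1], [1, 2]]
print(max_bipartite_matching(adj, 3))  # 3
```

Each call to `try_augment` searches for an augmenting path from one free left vertex, which is exactly the operation that faster algorithms such as Hopcroft-Karp batch together.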
Investigation of the Minimum Conditions for Reliable Estimation of Clinically Relevant HRV Measures
Ahrens, Esben; Sørensen, Helge Bjarup Dissing; Langberg, Henning;
2015-01-01
The R-peak localization error (jitter) of a heart rate variability (HRV) system has a great impact on the values of the HRV measures. Only a few studies have analyzed this subject, and they have done so purely from the aspect of the choice of sampling frequency. In this study we provide an overview of the various factors that comprise the jitter of a system. We propose a method inspired by the field of signal-averaged electrocardiography (SAECG) that allows for a quantification of the jitter of any HRV system that records and stores the raw ECG signal. Furthermore, with this method the differences between the HRV...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Stochastic variational approach to minimum uncertainty states
Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)]
1995-05-21
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)
Minimum Wage Effects in the Longer Run
Neumark, David; Nizalova, Olena
2007-01-01
Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…
New Minimum Wage Research: A Symposium.
Ehrenberg, Ronald G.; And Others
1992-01-01
Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…
5 CFR 630.206 - Minimum charge.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum charge. 630.206 Section 630.206 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS ABSENCE AND LEAVE Definitions and General Provisions for Annual and Sick Leave § 630.206 Minimum charge. (a) Unless an agency...
Stochastic variational approach to minimum uncertainty states
Illuminati, F; Illuminati, F; Viola, L
1995-01-01
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schrödinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials.
Monotonic Stable Solutions for Minimum Coloring Games
Hamers, H.J.M.; Miquel, S.; Norde, H.W.
2011-01-01
For the class of minimum coloring games (introduced by Deng et al. (1999)) we investigate the existence of population monotonic allocation schemes (introduced by Sprumont (1990)). We show that a minimum coloring game on a graph G has a population monotonic allocation scheme if and only if G is (P4,
Averill, M.; Briggle, A.
2006-12-01
Science policy and knowledge production lately have taken a pragmatic turn. Funding agencies increasingly are requiring scientists to explain the relevance of their work to society. This stems in part from mounting critiques of the "linear model" of knowledge production in which scientists operating according to their own interests or disciplinary standards are presumed to automatically produce knowledge that is of relevance outside of their narrow communities. Many contend that funded scientific research should be linked more directly to societal goals, which implies a shift in the kind of research that will be funded. While both authors support the concept of useful science, we question the exact meaning of "relevance" and the wisdom of allowing it to control research agendas. We hope to contribute to the conversation by thinking more critically about the meaning and limits of the term "relevance" and the trade-offs implicit in a narrow utilitarian approach. The paper will consider which interests tend to be privileged by an emphasis on relevance and address issues such as whose goals ought to be pursued and why, and who gets to decide. We will consider how relevance, narrowly construed, may actually limit the ultimate utility of scientific research. The paper also will reflect on the worthiness of research goals themselves and their relationship to a broader view of what it means to be human and to live in society. Just as there is more to being human than the pragmatic demands of daily life, there is more at issue with knowledge production than finding the most efficient ways to satisfy consumer preferences or fix near-term policy problems. We will conclude by calling for a balanced approach to funding research that addresses society's most pressing needs but also supports innovative research with less immediately apparent application.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Cosmic rays during the unusual solar minimum of 2009
Gil, Agnieszka
during 2007-2008 evolved to a longer period (up to 33-36 days) during 2009. Alania et al. (2014, submitted to JGR) have reported that the 2009 growth in the GCR intensity was mostly related to the drop in the solar wind velocity, the strength of the interplanetary magnetic field, and the drift during the negative polarity epoch. Fröhlich (2009) argued that the recent minimum was caused by a global decline of 0.2 K in the effective temperature of the Sun. Dikpati (2013) suggested that the reason for the prolonged and deep minimum was a somewhat different operation of the solar dynamo. On the other hand, revisions of the proxies showed that the Maunder Minimum was the latest, but not the only, grand minimum of solar activity to occur in the past (e.g. Jones et al., 2010). It might be the case that the last 23/24 solar minimum was the precursor of the end of the Modern grand maximum (e.g. Usoskin, 2013). References: 1. Alania M.V, R. Modzelewska, A. Wawrzynczak, 2014, submitted to JGR 2. Dikpati M., SSRv 176, 279-287, 2013 3. Fröhlich C., A&A 501, L27-L30, 2009 4. Gil A., R. Modzelewska, M.V Alania, AdSpR 50, 712-715, 2012 5. Jian L.K., C.T. Russell, J.G. Luhmann, SoPh 274, 321-344, 2011 6. Jones Ch.A., M.J. Thompson, S.M. Tobias, SSRv 152, 591-616, 2010 7. Kirk M. S., W.D. Pesnell, C. A. Young, S.A. Hess Webber, SoPh 257, 99-112, 2009 8. Leske R. A., A.C. Cummings, R.A. Mewaldt, E.C. Stone, SSRv 176, 253-263, 2013 9. McComas D.J., R.W. Ebert, H.A. Elliott, et al., GeoRL 35, CiteID L18103, 2008 10. Modzelewska R, M.V. Alania, SoPh 286, 593-607, 2013 11. Moraal H., P.H. Stoker, JGR 115, CiteID A12109, 2010 12. Smith E.J, JASTP 73, 277-289, 2011 13. Usoskin I.G., LRSP 10, doi 10.12942/lrsp-2013-1, 2013 14. Wang Y.-M., E. Robbrecht, N.R. Sheeley, ApJ. 707, 1372-1386, 2009
Estimating the Size and Timing of the Maximum Amplitude of Solar Cycle 24
Ke-Jun Li; Peng-Xin Gao; Tong-Wei Su
2005-01-01
A simple statistical method is used to estimate the size and timing of the maximum amplitude of the next solar cycle (cycle 24). Presuming cycle 23 to be a short cycle (as is more likely), the minimum of cycle 24 should occur about December 2006 (±2 months) and the maximum around March 2011 (±9 months), and the amplitude is 189.9 ± 15.5 if it is a fast riser, or about 136 if it is a slow riser. If we presume cycle 23 to be a long cycle (as is less likely), the minimum of cycle 24 should occur about June 2008 (±2 months) and the maximum about February 2013 (±8 months), and the maximum amplitude will be about 137 or 80, depending on whether the cycle is a fast or a slow riser.
An Improved CO2-Crude Oil Minimum Miscibility Pressure Correlation
Hao Zhang
2015-01-01
Minimum miscibility pressure (MMP), which plays an important role in miscible flooding, is a key parameter in determining whether crude oil and gas are completely miscible. On the basis of 210 groups of CO2-crude oil system minimum miscibility pressure data, an improved CO2-crude oil system minimum miscibility pressure correlation was built by a modified conjugate gradient method and a global optimizing method. The new correlation is a uniform empirical correlation to calculate the MMP for both thin oil and heavy oil, and is expressed as a function of reservoir temperature, the C7+ molecular weight of the crude oil, and the mole fractions of volatile components (CH4 and N2) and intermediate components (CO2, H2S, and C2~C6) of the crude oil. Compared against the eleven most popular and relatively high-accuracy CO2-oil system MMP correlations in the previous literature, on nine groups of CO2-oil MMP experimental data that had not been used to develop the new correlation, the new empirical correlation provides the best reproduction of the nine groups of experimental data, with a percentage average absolute relative error (%AARE) of 8% and a percentage maximum absolute relative error (%MARE) of 21%.
Dunham, L. L.
1971-01-01
The "legacy" of the humanities is discussed in terms of relevance, involvement, and other philosophical considerations. Reasons for studying foreign literature in language classes are developed in the article. Comment is also made on attitudes and ideas culled from the writings of Clifton Fadiman, Jean Paul Sartre, and James Baldwin. (RL)
Müller, Emmanuel; Assent, Ira; Günnemann, Stephan
2009-01-01
We prove that computation of this model is NP-hard. For RESCU, we propose an approximative solution that shows high accuracy with respect to our relevance model. Thorough experiments on synthetic and real world data show that RESCU successfully reduces the result to manageable sizes. It reliably achieves top clustering quality while competing approaches show greatly varying performance.
Is Information Still Relevant?
Ma, Lia
2013-01-01
Introduction: The term "information" in information science does not share the characteristics of those of a nomenclature: it does not bear a generally accepted definition and it does not serve as the bases and assumptions for research studies. As the data deluge has arrived, is the concept of information still relevant for information…
Müller, Emmanuel; Assent, Ira; Günnemann, Stephan;
2009-01-01
Subspace clustering aims at detecting clusters in any subspace projection of a high dimensional space. As the number of possible subspace projections is exponential in the number of dimensions, the result is often tremendously large. Recent approaches fail to reduce results to relevant subspace c...
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
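The distinction this abstract draws between maximum likelihood (ground-state) decoding and finite-temperature maximum entropy decoding can be made concrete on a toy Ising model small enough to enumerate exactly. This sketch is not the annealer experiment; the fields and couplings below are invented for illustration.

```python
import itertools, math

def decode(h, J, beta):
    """Decode a random-field Ising model by exhaustive enumeration.
    beta -> infinity gives maximum-likelihood (ground-state) decoding;
    finite beta gives maximum-entropy decoding, where each bit is set
    to the sign of its Boltzmann-averaged magnetization <s_i>."""
    n = len(h)
    mags = [0.0] * n
    for s in itertools.product([-1, 1], repeat=n):
        E = -sum(h[i] * s[i] for i in range(n))
        E -= sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        w = math.exp(-beta * E)  # unnormalized Boltzmann weight
        for i in range(n):
            mags[i] += w * s[i]
    return [1 if m > 0 else -1 for m in mags]

h = [0.5, -0.2, 0.1]
J = [[0, 0.3, 0], [0, 0, 0.3], [0, 0, 0]]
print(decode(h, J, beta=50.0))  # [1, 1, 1]: effectively maximum likelihood
print(decode(h, J, beta=1.0))   # [1, -1, 1]: thermal average flips bit 1
```

At large beta the ground state dominates, reproducing maximum likelihood; at beta = 1 the excited states contribute enough weight to flip the middle bit, which is exactly the kind of correction the finite-temperature decoder exploits.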
EXPERIMENTAL STUDY OF MINIMUM IGNITION TEMPERATURE
Igor WACHTER
2015-12-01
The aim of this scientific paper is an analysis of the minimum ignition temperature of a dust layer and the minimum ignition temperature of dust clouds. It can be used to identify threats in industrial production and civil engineering, where a layer of combustible dust could occur. Research was performed on spent coffee grounds. Tests were performed according to EN 50281-2-1:2002 Methods for determining the minimum ignition temperatures of dust (Method A). The objective of Method A is to determine the minimum temperature at which ignition or decomposition of dust occurs during thermal straining on a hot plate at a constant temperature. The highest minimum smouldering and carbonating temperature of spent coffee grounds for a 5 mm high layer was determined to lie in the interval from 280 °C to 310 °C over 600 seconds. Method B is used to determine the minimum ignition temperature of a dust cloud. The minimum ignition temperature of the studied dust was determined to be 470 °C (air pressure 50 kPa, sample weight 0.3 g).
2010-07-01
... mercury (Hg) sorbent flow rate Hourly Once per hour ✔ ✔ Minimum pressure drop across the wet scrubber or... rural HMIWI HMIWI a with dry scrubber followed by fabric filter HMIWI a with wet scrubber HMIWI a with dry scrubber followed by fabric filter and wet scrubber Maximum operating parameters: Maximum...
Does the Minimum Wage Cause Inefficient Rationing?
何满辉; 梁明秋
2008-01-01
By not allowing wages to clear the labor market, the minimum wage could cause workers with low reservation wages to be rationed out while equally skilled workers with higher reservation wages are employed. I find that proxies for reservation wages of unskilled workers in high-impact states did not rise relative to reservation wages in other states, suggesting that the increase in the minimum wage did not cause jobs to be allocated less efficiently. However, even if rationing is efficient, the minimum wage can still entail other efficiency costs.
Minimum emittance in TBA and MBA lattices
Xu, Gang; Peng, Yue-Mei
2015-03-01
For reaching a small emittance in a modern light source, triple bend achromat (TBA), theoretical minimum emittance (TME), and even multiple bend achromat (MBA) lattices have been considered. This paper derives the necessary condition for achieving minimum emittance in TBA and MBA lattices theoretically, in which the bending angle of the inner dipoles is a factor of 3^(1/3) bigger than that of the outer dipoles. We also calculate the conditions for attaining the minimum emittance of TBA in relation to phase advance in some special cases, with a purely mathematical method. These results may give some direction for lattice design.
Clinical Relevance of Adipokines
Matthias Blüher
2012-10-01
The incidence of obesity has increased dramatically during recent decades. Obesity increases the risk for metabolic and cardiovascular diseases and may therefore contribute to premature death. With increasing fat mass, secretion of adipose tissue derived bioactive molecules (adipokines) changes towards a pro-inflammatory, diabetogenic and atherogenic pattern. Adipokines are involved in the regulation of appetite and satiety, energy expenditure, activity, endothelial function, hemostasis, blood pressure, insulin sensitivity, energy metabolism in insulin sensitive tissues, adipogenesis, fat distribution and insulin secretion in pancreatic β-cells. Therefore, adipokines are clinically relevant as biomarkers for fat distribution, adipose tissue function, liver fat content, insulin sensitivity, chronic inflammation and have the potential for future pharmacological treatment strategies for obesity and its related diseases. This review focuses on the clinical relevance of selected adipokines as markers or predictors of obesity related diseases and as potential therapeutic tools or targets in metabolic and cardiovascular diseases.
Bergenholtz, Henning; Gouws, Rufus
2007-01-01
In explanatory dictionaries, both general language dictionaries and dictionaries dealing with languages for special purposes, the lexicographic definition is an important item to present the meaning of a given lemma. Due to a strong linguistic bias, resulting from an approach prevalent in the early phases of the development of theoretical lexicography, a distinction is often made between encyclopaedic information and semantic information in dictionary definitions, and dictionaries had often been criticized when their definitions were dominated by an encyclopaedic approach. This used to be seen as detrimental to the status of a dictionary as a container of linguistic knowledge. This paper shows that, from a lexicographic perspective, such a distinction is not relevant. What is important is that definitions should contain information that is relevant to and needed by the target users of that specific dictionary.
Wildemuth, Barbara M.
2009-01-01
A user's interaction with a DL is often initiated as the result of the user experiencing an information need of some kind. Aspects of that experience and how it might affect the user's interactions with the DL are discussed in this module. In addition, users continuously make decisions about and evaluations of the materials retrieved from a DL, relative to their information needs. Relevance judgments, and their relationship to the user's information needs, are discussed in this module.
Long Term Care Minimum Data Set (MDS)
U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...
Quantitative Research on the Minimum Wage
Goldfarb, Robert S.
1975-01-01
The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)
Impact of the Minimum Wage on Compression.
Wolfe, Michael N.; Candland, Charles W.
1979-01-01
Assesses the impact of increases in the minimum wage on salary schedules, provides guidelines for creating a philosophy to deal with the impact, and outlines options and presents recommendations. (IRT)
Minimum wages and employment in China
Fang, Tony; Lin, Carl
2015-01-01
... that minimum wage changes led to significant adverse effects on employment in the Eastern and Central regions of China, and resulted in disemployment for females, young adults, and low-skilled workers...
Minimum Wage Policy and Country's Technical Efficiency
Mohd Zaini Abd Karim; Sok-Gee Chan; Sallahuddin Hassan
2016-01-01
.... However, some quarters argued against the idea of a nationwide minimum wage asserting that it will lead to an increase in the cost of doing business and thus will hurt Malaysian competitiveness...
Graph theory for FPGA minimum configurations
Ruan Aiwu; Li Wenchang; Xiang Chuanyin; Song Jiangmin; Kang Shi; Liao Yongbo
2011-01-01
A traditional bottom-up modeling method for minimum configuration numbers is adopted for the study of FPGA minimum configurations. This method is limited if a large number of LUTs and multiplexers is present. Since graph theory has been extensively applied to circuit analysis and test, this paper focuses on modeling FPGA configurations. In our study, an internal logic block and the interconnections of an FPGA are considered as a vertex and an edge connecting two vertices in the graph, respectively. A top-down modeling method is proposed in the paper to achieve minimum configuration numbers for the CLB and IOB. Based on the proposed modeling approach and exhaustive analysis, the minimum configuration numbers for the CLB and IOB are five and three, respectively.
Price pass-through and minimum wages
Daniel Aaronson
1997-01-01
A textbook consequence of competitive markets is that an industry-wide increase in the price of inputs will be passed on to consumers through an increase in prices. This fundamental implication has been explored by researchers interested in who bears the burden of taxation and exchange rate fluctuations. However, little attention has focused on the price implications of minimum wage hikes. From a policy perspective, this is an oversight. Welfare analysis of minimum wage laws should not ignore...
The minimum wage and restaurant prices
Daniel Aaronson; Eric French; MacDonald, James M.
2004-01-01
Using both store-level and aggregated price data from the food away from home component of the Consumer Price Index survey, we show that restaurant prices rise in response to an increase in the minimum wage. These results hold up when using several different sources of variation in the data. We interpret these findings within a model of employment determination. The model implies that minimum wage hikes cause employment to fall and prices to rise if labor markets are competitive but potential...
Minimum Dominating Tree Problem for Graphs
LIN Hao; LIN Lan
2014-01-01
A dominating tree T of a graph G is a subtree of G which contains at least one neighbor of each vertex of G. The minimum dominating tree problem is to find a dominating tree of G with minimum number of vertices, which is an NP-hard problem. This paper studies some polynomially solvable cases, including interval graphs, Halin graphs, special outer-planar graphs and others.
Lower bounds on the maximum energy benefit of network coding for wireless multiple unicast
Goseling, Jasper; Matsumoto, Ryutaroh; Uyematsu, Tomohiko; Weber, Jos H.
2010-01-01
We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio of the minimum energy required by routing and network coding
Cathy Béatrice Kurz Besson
2016-08-01
Western Iberia has recently shown an increasing frequency of drought conditions coupled with heatwave events, leading to exacerbated limiting climatic conditions for plant growth. It is not clear to what extent the wood growth and density of agroforestry species have suffered from such changes or from recent extreme climate events. To address this question, tree-ring width and density chronologies were built for a P. pinaster stand in southern Portugal and correlated with climate variables, including the minimum, mean and maximum temperatures and the number of cold days. Monthly and maximum daily precipitation were also analyzed, as well as dry spells. The drought effect was assessed using the standardized precipitation-evapotranspiration index (SPEI), a multi-scalar drought index, at timescales of 1 to 24 months. The climate-growth/density relationships were evaluated for the period 1958-2011. We show that both wood radial growth and density benefit greatly from the strong decline in the number of cold days and the increase in minimum temperature. Yet the benefits are hindered by long-term water deficit, which results in different levels of impact on wood radial growth and density. Despite the intensification of long-term water deficit, tree-ring width appears to benefit from the minimum temperature increase, whereas the effects of long-term droughts significantly prevail on tree-ring density. Our results further highlight the dependency of the species on deep water sources after the juvenile stage. The impact of climate changes on long-term droughts and their repercussion on the shallow groundwater table and P. pinaster's vulnerability are also discussed. This work provides relevant information for forest management in the semi-arid Alentejo region of Portugal. It should ease the elaboration of mitigation strategies to assure P. pinaster's production capacity and quality in response to more arid conditions in the near future in the region.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented for estimating the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, from which the receiver function is estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy treatment of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
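The Levinson step referred to above can be sketched compactly. The following is an illustrative implementation (not the authors' code) of the Levinson-Durbin solution of the Toeplitz autocorrelation equations; note how each reflection coefficient of magnitude below 1 keeps the prediction error positive, the stability property the abstract mentions.

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz autocorrelation equations for an AR(order) model.
    r: autocorrelation sequence r[0..order]. Returns (a, e, ks):
    prediction-error filter a (a[0] = 1), final prediction error e,
    and the reflection coefficients ks."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    e = r[0]
    ks = []
    for m in range(1, order + 1):
        acc = r[m]
        for i in range(1, m):
            acc += a[i] * r[m - i]
        k = -acc / e
        ks.append(k)
        # symmetric coefficient update of the order-m filter
        new_a = a[:]
        for i in range(1, m):
            new_a[i] = a[i] + k * a[m - i]
        new_a[m] = k
        a = new_a
        e *= (1.0 - k * k)  # |k| < 1 keeps the error positive (stability)
    return a, e, ks
```

For the AR(1)-like autocorrelation [1, 0.5, 0.25] the recursion recovers a single coefficient of -0.5 and a second-order coefficient of zero.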
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are found by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as functions of the time of day.
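The differentiation step can be illustrated numerically. The sketch below assumes a simple single-diode panel model with hypothetical parameters (I_SC, I_0 and V_T are illustrative, not from the article) and locates the maximum power point by scanning P(V) = V·I(V), which is equivalent to setting dP/dV = 0:

```python
import math

# Hypothetical panel parameters (assumptions, not the article's data):
I_SC = 5.0           # short-circuit current [A]
I_0 = 1e-9           # diode saturation current [A]
V_T = 0.0257 * 36    # thermal voltage times 36 cells in series [V]

def current(v):
    """Single-diode approximation of the panel I-V curve."""
    return I_SC - I_0 * (math.exp(v / V_T) - 1.0)

def maximum_power(v_max=25.0, steps=100000):
    """Scan P(V) = V * I(V) for its maximum; numerically equivalent to
    solving dP/dV = 0 as described in the article."""
    best_v, best_p = 0.0, 0.0
    for i in range(steps + 1):
        v = v_max * i / steps
        p = v * current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p
```

With these illustrative parameters the maximum power point sits a little below the open-circuit voltage of about 20.7 V.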
The inverse maximum flow problem with lower and upper bounds for the flow
Deaconu Adrian
2008-01-01
The general inverse maximum flow problem (denoted GIMF) is considered, in which lower and upper bounds for the flow are changed so that a given feasible flow becomes a maximum flow and the distance (in the l1 norm) between the initial vector of bounds and the modified vector is minimum. Strongly and weakly polynomial algorithms for solving this problem are proposed. It is also proved that the inverse maximum flow problem in which only the upper bound for the flow is changed (IMF) is a particular case of the GIMF problem.
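The underlying notion of a feasible flow being (or becoming) a maximum flow can be checked directly via max-flow/min-cut: a feasible flow is maximum if and only if the residual graph contains no augmenting source-sink path. A minimal sketch (adjacency-matrix representation, illustrative and not from the paper):

```python
from collections import deque

def is_maximum_flow(n, cap, flow, s, t):
    """Return True iff the feasible flow is maximum, i.e. the residual
    graph has no s-t path. cap and flow are n x n matrices."""
    seen = [False] * n
    seen[s] = True
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            # forward residual capacity plus cancellable reverse flow
            residual = cap[u][v] - flow[u][v] + flow[v][u]
            if residual > 0 and not seen[v]:
                seen[v] = True
                q.append(v)
    return not seen[t]
```

In the GIMF setting, adjusting the bounds so that this check passes is exactly what makes the given flow maximum.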
Nursing Minimum Data Set for School Nursing Practice. Position Statement. Revised
Denehy, Janice
2012-01-01
It is the position of the National Association of School Nurses (NASN) to support the collection of essential nursing data as listed in the Nursing Minimum Data Set (NMDS). The NMDS provides a basic structure to identify the data needed to delineate nursing care delivered to clients as well as relevant characteristics of those clients. Structure…
Nowcasting daily minimum air and grass temperature
Savage, M. J.
2016-02-01
Site-specific and accurate prediction of daily minimum air and grass temperatures, made available online several hours before their occurrence, would be of significant benefit to several economic sectors and to the planning of human activities. Site-specific and reasonably accurate nowcasts of daily minimum temperature, issued several hours before its occurrence and using measured sub-hourly temperatures from earlier in the morning as model inputs, were investigated. Various temperature models were tested for their ability to accurately nowcast daily minimum temperatures 2 or 4 h before sunrise. Temperature datasets used for the model nowcasts included sub-hourly grass and grass-surface (infrared) temperatures from one location in South Africa and air temperatures from four subtropical sites varying in altitude (USA and South Africa) and from one site in central sub-Saharan Africa. The nowcast models employed either exponential or square root functions to describe the rate of nighttime temperature decrease, inverted so as to determine the minimum temperature. The models were also applied in near real time using an open web-based system to display the nowcasts. Extrapolation algorithms for the site-specific nowcasts were also implemented in a datalogger in an innovative and mathematically consistent manner. Comparison of model 1 (exponential) nowcasts against measured daily minimum air temperatures yielded root mean square errors (RMSEs) for grass minimum temperature and the 4-h nowcasts.
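The inverted-exponential idea admits a toy closed form. Assuming the deviations from the asymptotic minimum decay geometrically between equally spaced readings (an assumption made here for illustration; the paper's models are richer), three pre-dawn readings determine the nowcast minimum exactly:

```python
def nowcast_minimum(t0, t1, t2):
    """Nowcast the asymptotic (minimum) temperature from three equally
    spaced pre-dawn readings, assuming exponential decay toward it:
    T_k = T_min + d0 * r**k. The geometric deviations give the closed
    form T_min = (T0*T2 - T1^2) / (T0 + T2 - 2*T1)."""
    denom = t0 + t2 - 2.0 * t1
    if abs(denom) < 1e-12:
        raise ValueError("readings do not show exponential decay")
    return (t0 * t2 - t1 * t1) / denom
```

For readings 10, 6, 4 degrees (deviations 8, 4, 2 above a 2-degree minimum) the formula returns exactly 2.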
Gabere MN
2016-06-01
Musa Nur Gabere,1 Mohamed Aly Hussein,1 Mohammad Azhar Aziz2 1Department of Bioinformatics, King Abdullah International Medical Research Center/King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; 2Colorectal Cancer Research Program, Department of Medical Genomics, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia Purpose: There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods: In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy–maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results: The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both ten-fold and leave-one-out cross validations in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved a prediction accuracy of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1
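The mRMR step can be sketched as a greedy search that trades relevance against redundancy. This is a simplified illustration with binary labels, using an F-statistic for relevance and absolute Pearson correlation for redundancy (an assumption for this sketch; classical mRMR uses mutual information on discretized data):

```python
def f_score(values, labels):
    """One-way ANOVA F statistic for one feature with binary labels (0/1)."""
    g0 = [v for v, y in zip(values, labels) if y == 0]
    g1 = [v for v, y in zip(values, labels) if y == 1]
    m0 = sum(g0) / len(g0)
    m1 = sum(g1) / len(g1)
    m = sum(values) / len(values)
    between = len(g0) * (m0 - m) ** 2 + len(g1) * (m1 - m) ** 2
    within = sum((v - m0) ** 2 for v in g0) + sum((v - m1) ** 2 for v in g1)
    return between / (within / (len(values) - 2) + 1e-12)

def corr(x, y):
    """Pearson correlation between two feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy + 1e-12)

def mrmr(features, labels, k):
    """Greedy mRMR: at each step pick the feature maximizing
    relevance (F) minus mean redundancy (|corr|) to already-selected ones."""
    relevance = [f_score(f, labels) for f in features]
    selected = [max(range(len(features)), key=lambda i: relevance[i])]
    while len(selected) < k:
        best, best_score = None, float("-inf")
        for i in range(len(features)):
            if i in selected:
                continue
            red = sum(abs(corr(features[i], features[j]))
                      for j in selected) / len(selected)
            score = relevance[i] - red
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected
```

Given one discriminative feature, a near-duplicate of it, and an uninformative one, the greedy criterion picks the two informative features rather than the uninformative one, with the redundancy term discounting the duplicate.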
Chronobiology: relevance for tuberculosis.
Santos, Lígia Gabrielle; Pires, Gabriel Natan; Azeredo Bittencourt, Lia Rita; Tufik, Sergio; Andersen, Monica Levy
2012-07-01
Despite the knowledge concerning the pathogenesis of tuberculosis, this disease remains one of the most important causes of mortality worldwide. Several risk factors are well known, such as poverty, HIV infection, and poor nutrition, among others. However, some issues that may influence tuberculosis warrant further investigation. In particular, the chronobiological aspects related to tuberculosis have garnered limited attention. In general, the interface between tuberculosis and chronobiology is manifested in four ways: variations in vitamin D bioavailability, winter conditions, associated infections, and circannual oscillations of lymphocyte activity. Moreover, tuberculosis is related to the following chronobiological factors: seasonality, latitude, photoperiod and radiation. Despite the relevance of these topics, the relationship between them has received little systematic review. This review aims to synthesize the studies regarding the association between tuberculosis and chronobiology, to encourage critical discussion, and to highlight its applicability to health policies for tuberculosis.
2013-04-16
... Nutrition (HFS- 850), Food and Drug Administration, 5100 Paint Branch Pkwy, College Park, MD 20740, 240-402..., Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 350a(i)) establishes requirements for the nutrient... infant formula, a food that is intended to be the sole source of nutrition for infants and...
Lee, Sang-Yong; Ortega, Antonio
2000-04-01
We address the problem of online rate control in digital cameras, where the goal is to achieve near-constant distortion for each image. Digital cameras usually have a pre-determined number of images that can be stored in the given memory size and require limited time delay and constant quality for each image. Due to time delay restrictions, each image should be stored before the next image is received. Therefore, we need to define an online rate control that is based on the amount of memory used by previously stored images, the current image, and the estimated rate of future images. In this paper, we propose an algorithm for online rate control in which an adaptive reference, a 'buffer-like' constraint, and a minimax criterion (as a distortion metric to achieve near-constant quality) are used. The adaptive reference is used to estimate future images, and the 'buffer-like' constraint is required to keep enough memory for future images. We show that using our algorithm to select online bit allocation for each image in a randomly given set of images provides near-constant quality. Also, we show that our result is near optimal when a minimax criterion is used, i.e., it achieves a performance close to that obtained by applying an off-line rate control that assumes exact knowledge of the images. Suboptimal behavior is only observed in situations where the distribution of images is not truly random (e.g., if most of the 'complex' images are captured at the end of the sequence). Finally, we propose a T-step delay rate control algorithm and, using the result of the 1-step delay rate control algorithm, show that it removes the suboptimal behavior.
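The 'buffer-like' constraint can be sketched as a simple reservation rule: grant the current image only what remains after setting aside, for each future image, an adaptive reference rate. The function name and the mean-of-past-sizes reference below are illustrative assumptions, not the authors' notation:

```python
def bit_budget(remaining_memory, images_left, past_sizes, default=1.0):
    """'Buffer-like' constraint sketch: budget for the current image is
    whatever memory is left after reserving the adaptive reference rate
    (here simply the mean of past image sizes) for each future image."""
    reference = sum(past_sizes) / len(past_sizes) if past_sizes else default
    budget = remaining_memory - (images_left - 1) * reference
    # clamp to the physically meaningful range
    return max(0.0, min(budget, remaining_memory))
```

For example, with 100 units of memory, 10 images to go, and a reference rate of 8, the current image may spend at most 100 - 9*8 = 28 units.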
2015-12-15
Imager (GUVI) onboard the NASA/TIMED satellite. Definition of NCAR's Role: NCAR PI H.-L. Liu will help the PI (F. Sassi) interfacing the DAS...and delivering the WACCM-X model to the overall project PI, Dr. F. Sassi, and the NRL team; (2) enabling coupling of WACCM-X with the NAVDAS system... team members in the validation of the thermospheric products. Accomplishments A. WACCM-X Development: The NCAR Whole Atmosphere Community Climate
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance and temperature is examined. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Minimum description length synthetic aperture radar image segmentation.
Galland, Frédéric; Bertaux, Nicolas; Réfrégier, Philippe
2003-01-01
We present a new minimum description length (MDL) approach based on a deformable partition--a polygonal grid--for automatic segmentation of a speckled image composed of several homogeneous regions. The image segmentation thus consists of estimating the polygonal grid or, more precisely, its number of regions, its number of nodes and the locations of its nodes. These estimations are performed by minimizing a unique MDL criterion which takes into account the probabilistic properties of speckle fluctuations and a measure of the stochastic complexity of the polygonal grid. This approach then leads to a global MDL criterion with no undetermined parameter, since no regularization term other than the stochastic complexity of the polygonal grid is necessary, and noise parameters can be estimated with maximum-likelihood-like approaches. The performance of this technique is illustrated on synthetic and real synthetic aperture radar images of agricultural regions, and the influence of the different terms of the model is analyzed.
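A one-dimensional toy version conveys the MDL principle used here: compare the description length of one homogeneous region against two, charging extra bits for each added boundary. This is a sketch under Gaussian coding assumptions; the paper's criterion, for speckle statistics and polygonal grids, is considerably more elaborate:

```python
import math

def codelength(segment):
    """Gaussian data cost (in nats, up to constants) of coding one
    homogeneous segment with its own mean and variance."""
    n = len(segment)
    m = sum(segment) / n
    rss = sum((x - m) ** 2 for x in segment)
    var = max(rss / n, 1e-12)  # floor avoids log(0) for constant segments
    return 0.5 * n * math.log(var)

def mdl_best_split(signal):
    """One-level MDL decision: split the 1-D signal where the total
    description length (data cost plus log(n) per extra boundary) is
    smallest; return None if no split beats the single-segment model."""
    n = len(signal)
    best_cost, best_split = codelength(signal), None
    for i in range(2, n - 1):
        cost = codelength(signal[:i]) + codelength(signal[i:]) + math.log(n)
        if cost < best_cost:
            best_cost, best_split = cost, i
    return best_split
```

On a clean step signal the criterion places the boundary at the step; on a constant signal the boundary penalty makes the single-segment model win, so no split is reported.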
Hu, Jing; Hu, Jie; Wang, Yuanmei
2003-03-01
In magnetoencephalography (MEG) inverse research, according to the point source model and the distributed source model, neuromagnetic source reconstruction methods are classified as parametric current dipole localization and nonparametric source imaging (or current density reconstruction). The MEG source imaging technique can be formulated as an inherently ill-posed and highly underdetermined linear inverse problem. In order to yield a robust and plausible neural current distribution image, various approaches have been proposed. Among those, weighted minimum-norm estimation with Tikhonov regularization is a popular technique. The authors present a relatively comprehensive theoretical framework. Following a discussion of the development, several regularized minimum-norm algorithms are described in detail, including depth normalization, low resolution electromagnetic tomography (LORETA), the focal underdetermined system solver (FOCUSS), and selective minimum-norm (SMN). In addition, some other imaging methods, e.g., the maximum entropy method (MEM), methods incorporating other brain functional information such as fMRI data, and the maximum a posteriori (MAP) method using a Markov random field model, are explained as well. From the generalized point of view based on minimum-norm estimation with Tikhonov regularization, all these algorithms aim to resolve the tradeoff between fidelity to the measured data and constraint assumptions about the neural source configuration, such as anatomical and physiological information. In conclusion, almost all source imaging approaches can be made consistent with regularized minimum-norm estimation to some extent.
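The Tikhonov-regularized minimum-norm estimate has the closed form x = Lᵀ(LLᵀ + λI)⁻¹y. A minimal sketch for two sensors, with the 2x2 inverse written out explicitly (illustrative only, not tied to any MEG toolbox):

```python
def minimum_norm(L, y, lam):
    """Tikhonov-regularized minimum-norm estimate x = Lᵀ(LLᵀ + λI)⁻¹y,
    written out for m = 2 sensors so the 2x2 inverse is explicit.
    L: 2 x n lead-field matrix, y: length-2 measurement vector."""
    n = len(L[0])
    # G = L Lᵀ + λ I  (2 x 2 Gram matrix)
    g00 = sum(L[0][k] * L[0][k] for k in range(n)) + lam
    g01 = sum(L[0][k] * L[1][k] for k in range(n))
    g11 = sum(L[1][k] * L[1][k] for k in range(n)) + lam
    det = g00 * g11 - g01 * g01
    # w = G⁻¹ y via the closed-form 2x2 inverse
    w0 = (g11 * y[0] - g01 * y[1]) / det
    w1 = (-g01 * y[0] + g00 * y[1]) / det
    # x = Lᵀ w: the minimum-norm source estimate
    return [L[0][k] * w0 + L[1][k] * w1 for k in range(n)]
```

For small λ the estimate reproduces the measurements (Lx ≈ y) while picking the source vector of smallest norm among all consistent ones; increasing λ trades data fidelity for a smaller, more stable solution, which is exactly the tradeoff discussed above.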
Deep solar minimum and global climate changes
Ahmed A. Hady
2013-05-01
This paper examines the deep minimum of solar cycle 23 and its potential impact on climate change. In addition, a source region of the solar wind at solar activity minimum has been studied, especially for solar cycle 23, whose minimum was the deepest of the last 500 years. Solar activity has had a notable effect on palaeoclimatic changes. Contemporary solar activity is so weak that it would be expected to cause global cooling; however, the prevailing global warming, caused by the build-up of greenhouse gases in the troposphere, appears to exceed this solar effect. This paper discusses this issue.
A minimum achievable PV electrical generating cost
Sabisky, E.S. [11 Carnation Place, Lawrenceville, NJ 08648 (United States)
1996-03-22
The role and share of photovoltaic (PV) generated electricity in our nation's future energy arsenal is primarily dependent on its future production cost. This paper provides a framework for obtaining a minimum achievable electrical generating cost (a lower bound) for fixed, flat-plate photovoltaic systems. A cost of 2.8 cents/kWh (1990$) was derived for a plant located in Southwestern USA sunshine using a cost of money of 8%. In addition, a value of 22 cents/Wp (1990$) was estimated as a minimum module manufacturing cost/price.
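The kind of lower-bound cost arithmetic involved can be illustrated with a simple levelized-cost calculation. The accounting below (fixed charge rate times capital, plus O&M, divided by annual energy) is a generic sketch with made-up inputs, not the paper's detailed model:

```python
def levelized_cost_cents_per_kwh(capital_per_kw, fixed_charge_rate,
                                 om_per_kw_yr, capacity_factor):
    """Generic levelized electricity cost sketch: annualized capital plus
    O&M per kW, spread over the kWh that kW produces in a year."""
    annual_cost = capital_per_kw * fixed_charge_rate + om_per_kw_yr  # $/kW-yr
    annual_kwh_per_kw = 8760.0 * capacity_factor                     # kWh/kW-yr
    return 100.0 * annual_cost / annual_kwh_per_kw                   # cents/kWh
```

With illustrative inputs of $1000/kW capital, an 8% fixed charge rate, $10/kW-yr O&M and a 25% capacity factor, the formula gives about 4.1 cents/kWh; driving such inputs to their plausible floors is what produces a lower bound of the kind the paper derives.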
Weight-Constrained Minimum Spanning Tree Problem
Henn, Sebastian Tobias
2007-01-01
In an undirected graph G we associate costs and weights with each edge. The weight-constrained minimum spanning tree problem is to find a spanning tree of total edge weight at most a given value W and of minimum total cost under this restriction. In this thesis a literature overview of this NP-hard problem and theoretical properties concerning the convex hull and the Lagrangian relaxation are given. We also present some inclusion and exclusion tests for this problem. We apply a ranking algorithm and the me...
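The Lagrangian relaxation mentioned above can be sketched by folding the weight into the cost with a multiplier and running an ordinary MST algorithm for each multiplier value. The coarse grid over multipliers below is a heuristic of my own construction for illustration; it is not guaranteed optimal (the problem is NP-hard):

```python
def kruskal(n, edges, lam):
    """MST under the Lagrangian cost (cost + lam*weight).
    edges: list of (u, v, cost, weight) tuples."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for u, v, c, w in sorted(edges, key=lambda e: e[2] + lam * e[3]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, c, w))
    return tree

def constrained_mst(n, edges, max_weight,
                    lams=(0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0)):
    """Lagrangian-relaxation heuristic for the weight-constrained MST:
    penalize weight with multiplier lam and keep the cheapest tree that
    satisfies the weight bound."""
    best = None
    for lam in lams:
        tree = kruskal(n, edges, lam)
        if sum(w for *_, w in tree) <= max_weight:
            cost = sum(c for _, _, c, _ in tree)
            if best is None or cost < best[0]:
                best = (cost, tree)
    return best
```

On a triangle where the two cheap edges are heavy, lam = 0 yields an infeasible tree while a positive multiplier steers Kruskal toward the light edge, recovering a feasible tree of cost 6 under a weight bound of 11.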
XMM-Newton Observations of the 2003 X-Ray Minimum of Eta Carinae
Hamaguchi, K.; Corcoran, M. F.; White, N. E.; Damineli, A.; Davidson, K.; Gull, T. R.
2004-01-01
The XMM-Newton X-ray observatory took part in the multi-wavelength observing campaign of the massive, evolved star Eta Carinae during its recent X-ray minimum in June 2003. This paper reports the first results of these observations, which were performed (1) before the minimum (five times in January 2003), (2) near the X-ray maximum just before the minimum (two times in June) and (3) during the minimum (four times in July-August). Hard X-ray emission from the point source of Eta Carinae was detected even during the minimum. The observed flux above 3 keV was approximately 3x10^-12 erg cm^-2 s^-1, about one percent of the flux before the minimum. Light curves from the individual observations show no time variability on the scale of a few kiloseconds. Changes in the spectral shape occurred, but these changes were smaller than expected if the minimum is produced solely by an increase of hydrogen column density. Fits of the hard X-ray source by an absorbed one-temperature model show a constant plasma temperature at around 5 keV and an increase of column density from 5x10^22 cm^-2 to 2x10^23 cm^-2. The spectra below 6 keV deviate significantly from the models that fit the higher-energy emission. The X-ray minimum seems to be dominated by an apparent decrease of the emission measure, suggesting that the brightest part of the X-ray emitting region is completely obscured during the minimum in the form of an eclipse. Partial-covering plasma emission models might be considered for the spectral variation. The spectra also showed strong iron K line emission from both hot and cold gas, and weak line emission from Ni, Ca, Ar, S and Si.
Constructing minimum-cost flow-dependent networks
Thomas, Doreen A.; Weng, Jia F.
2002-09-01
In the construction of a communication network, the length of the network is an important but not unique factor determining the cost of the network. Among many possible network models, Gilbert proposed a flow-dependent model in which flow demands are assigned between each pair of points in a given point set A, and the cost per unit length of a link in the network is a function of the flow through the link. In this paper we first investigate the properties of this Gilbert model: the concavity of the cost function, decomposition, local minimality, the number of Steiner points and the maximum degree of Steiner points. Then we propose three heuristics for constructing minimum cost Gilbert networks. Two of them come from the fact that generally a minimum cost Gilbert network stands between two extremes: the complete network G(A) on A and the edge-weighted Steiner minimal tree W(A) on A. The first heuristic starts with G(A) and reduces the cost by splitting angles; the second one starts with both G(A) and W(A), and reduces the cost by selecting low cost paths. As a generalisation of the second heuristic, the third heuristic constructs a new Gilbert network of less cost by hybridising known Gilbert networks. Finally we discuss some considerations in practical applications.
Phylogenetic Applications of the Minimum Contradiction Approach on Continuous Characters
Marc Thuillard
2009-01-01
We describe the conditions under which a set of continuous variables or characters can be described as an X-tree or a split network. A distance matrix corresponds exactly to a split network or a valued X-tree if, after ordering of the taxa, the variables' values can be embedded into a function with at most a local maximum and a local minimum, crossing any horizontal line at most twice. In real applications, the order of the taxa best satisfying the above conditions can be obtained using the Minimum Contradiction method. This approach is applied to two sets of continuous characters. The first set corresponds to craniofacial landmarks in hominids. The contradiction matrix is used to identify possible tree structures and some alternatives when they exist. We explain how to discover the main structuring characters in a tree. The second set consists of a sample of 100 galaxies; in this second example we show how to discretize the continuous variables describing physical properties of the galaxies without disrupting the underlying tree structure.
Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi R
2007-12-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole-genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data; phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation contributed by a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Binary cluster collision dynamics and minimum energy conformations
Muñoz, Francisco [Max Planck Institute of Microstructure Physics, Weinberg 2, 06120 Halle (Germany); Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Centro para el Desarrollo de la Nanociencia y Nanotecnología, CEDENNA, Avenida Ecuador 3493, Santiago (Chile); Rogan, José; Valdivia, J.A. [Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Centro para el Desarrollo de la Nanociencia y Nanotecnología, CEDENNA, Avenida Ecuador 3493, Santiago (Chile); Varas, A. [Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Nano-Bio Spectroscopy Group, ETSF Scientific Development Centre, Departamento de Física de Materiales, Universidad del País Vasco UPV/EHU, Av. Tolosa 72, E-20018 San Sebastián (Spain); Kiwi, Miguel, E-mail: m.kiwi.t@gmail.com [Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Centro para el Desarrollo de la Nanociencia y Nanotecnología, CEDENNA, Avenida Ecuador 3493, Santiago (Chile)
2013-10-15
The collision dynamics of one Ag or Cu atom impinging on a Au{sub 12} cluster is investigated by means of DFT molecular dynamics. Our results show that the experimentally confirmed 2D to 3D transition of Au{sub 12}→Au{sub 13} is mostly preserved by the resulting planar Au{sub 12}Ag and Au{sub 12}Cu minimum energy clusters, which is quite remarkable in view of the excess energy, well above the 2D–3D potential barrier height. The process is accompanied by a large s−d hybridization and charge transfer from Au to Ag or Cu. The dynamics of the collision process mainly yields fusion of projectile and target, although scattering and cluster fragmentation also occur for large energies and large impact parameters. While Ag projectiles favor fragmentation, Cu favors scattering due to its smaller mass. The projectile size does not play a major role in favoring the fragmentation or scattering channels. By comparing our collision results with those obtained by an unbiased minimum energy search of 4483 Au{sub 12}Ag and 4483 Au{sub 12}Cu configurations obtained phenomenologically, we find that there is an extra bonus: without an increase of computer time, collisions yield the planar lower-energy structures that are not feasible to obtain using semi-classical potentials. In fact, we conclude that phenomenological potentials do not even provide adequate seeds for the search of global energy minima for planar structures. Since the fabrication of nanoclusters is mainly achieved by synthesis or laser ablation, the set of local minima configurations we provide here, and their distribution as a function of energy, are more relevant than the global minimum for analyzing experimental results obtained at finite temperatures, and is consistent with the dynamical coexistence of 2D and 3D liquid Au cluster conformations obtained previously.
Minimum training requirement in ultrasound imaging of peripheral arterial disease
Eiberg, J P; Hansen, M A; Grønvall Rasmussen, J B
2008-01-01
To demonstrate the minimum training requirement when performing ultrasound of peripheral arterial disease.
Socheslav D. P.
2011-04-01
A model of the interrelation between reliability indicators and the basic significant parameters of a two-cascade thermoelectric cooling device (TED) with series electrical connection of the cascades is considered. Relations are derived that allow the reliability indicators, namely the failure rate, to be estimated when designing a two-cascade TED working in a current mode that provides the minimum failure rate over a wide range of temperature drops, taking into account thermal loading. The possibility of using this mode when the prevailing requirement is maintenance of the minimum failure rate and the maximum probability of failure-free operation of the cascade TED is shown.
Auctioning off Prizes for Agricultural Product Flow under a Policy of Minimum Prices
Wilson da Cruz Vieira
2015-12-01
In this article, we analyze the use of descending clock auctions in the implementation of a minimum price policy. This type of auction has been used by the Brazilian government in its policy of support for agricultural prices. We propose a clock auction model along the lines used by the Brazilian government and derive its main implications. Based on data from auctions already held and on the implications of the theoretical model, we conclude that the following factors are crucial to minimizing the costs of implementing a minimum price policy via auctions: the choice of product to be auctioned, the amount auctioned, the reserve price (maximum prize), and the auction rules.
Reliable Steganalysis Using a Minimum Set of Samples and Features
Bas Patrick
2009-01-01
This paper proposes to determine a sufficient number of images for reliable classification and to use feature selection to select the most relevant features for achieving reliable steganalysis. First, dimensionality issues in the context of classification are outlined, and the impact of the different parameters of a steganalysis scheme (the number of samples, the number of features, the steganography method, and the embedding rate) is studied. On the one hand, it is shown, using bootstrap simulations, that the standard deviation of the classification results can be very large if the training sets used are too small; moreover, a minimum of 5000 images is needed in order to perform reliable steganalysis. On the other hand, we show how the feature selection process using the OP-ELM classifier makes it possible both to reduce the dimensionality of the data and to highlight the weaknesses and advantages of the six most popular steganographic algorithms.
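The bootstrap estimate of the spread of classification results can be sketched directly: resample the per-image correct/incorrect outcomes with replacement and measure the dispersion of the resulting accuracies. Illustrative code, not the paper's experimental pipeline:

```python
import random

def bootstrap_accuracy_std(correct_flags, n_boot=2000, seed=7):
    """Bootstrap standard deviation of classification accuracy:
    resample the per-image outcomes (1 = correct, 0 = incorrect)
    with replacement n_boot times and take the std of the accuracies."""
    rng = random.Random(seed)
    n = len(correct_flags)
    accs = []
    for _ in range(n_boot):
        sample = [correct_flags[rng.randrange(n)] for _ in range(n)]
        accs.append(sum(sample) / n)
    mean = sum(accs) / n_boot
    return (sum((a - mean) ** 2 for a in accs) / n_boot) ** 0.5
```

With 100 images at 80% accuracy the bootstrap std is near the binomial value sqrt(0.8·0.2/100) ≈ 0.04; with ten times as many images it shrinks markedly, which is the small-training-set effect the paper quantifies.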
Completeness properties of the minimum uncertainty states
Trifonov, D. A.
1993-01-01
The completeness properties of the Schrödinger minimum uncertainty states (SMUS) and of some of their subsets are considered. The invariant measures and the resolution-of-unity measures for the set of SMUS are constructed, and the representation of squeezing and correlating operators and of SMUS as superpositions of Glauber coherent states on the real line is elucidated.
Minimum Wage Effects throughout the Wage Distribution
Neumark, David; Schweitzer, Mark; Wascher, William
2004-01-01
This paper provides evidence on a wide set of margins along which labor markets can adjust in response to increases in the minimum wage, including wages, hours, employment, and ultimately labor income. Not surprisingly, the evidence indicates that low-wage workers are most strongly affected, while higher-wage workers are little affected. Workers…
A Minimum Relative Entropy Principle for AGI
Ven, Antoine van de; Schouten, Ben
2010-01-01
In this paper the principle of minimum relative entropy (PMRE) is proposed as a fundamental principle and idea that can be used in the field of AGI. It is shown to have a very strong mathematical foundation, to be even more fundamental than Bayes' rule or MaxEnt alone, and that it can be related
What's Happening in Minimum Competency Testing.
Frahm, Robert; Covington, Jimmie
An examination of the current status of minimum competency testing is presented in a series of short essays, which discuss case studies of individual school systems and state approaches. Sections are also included on the viewpoints of critics and supporters, teachers and teacher organizations, principals and students, and the federal government.…
Statistical inference of Minimum Rank Factor Analysis
Shapiro, A; Ten Berge, JMF
2002-01-01
For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the
Minimum Bias and Underlying Event at CMS
Fano, Livio
2006-01-01
The prospects of measuring minimum bias (MB) collisions and studying the underlying event (UE) at CMS are discussed. Two methods are described. The first is based on the measurement of charged tracks in the transverse region with respect to a charged-particle jet. The second relies on the selection of muon-pair events from the Drell-Yan process.
44 CFR 62.6 - Minimum commissions.
2010-10-01
... ADJUSTMENT OF CLAIMS Issuance of Policies § 62.6 Minimum commissions. (a) The earned commission which shall be paid to any property or casualty insurance agent or broker duly licensed by a state insurance regulatory authority, with respect to each policy or renewal the agent duly procures on behalf of the...
Context quantization by minimum adaptive code length
Forchhammer, Søren; Wu, Xiaolin
2007-01-01
Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols...
Time Crystals from Minimum Time Uncertainty
Faizal, Mir; Das, Saurya
2016-01-01
Motivated by the Generalized Uncertainty Principle, covariance, and a minimum measurable time, we propose a deformation of the Heisenberg algebra, and show that this leads to corrections to all quantum mechanical systems. We also demonstrate that such a deformation implies a discrete spectrum for time. In other words, time behaves like a crystal.
Minimum impact house prototype for sustainable building
Drexler, H.; Jauslin, D.
2010-01-01
The Minihouse is a prototype for a sustainable townhouse. On a site of only 29 sqm it offers 154 sqm of urban life. The project 'Minimum Impact House' addresses two important questions: How do we provide living space in the cities without destroying the landscape? How to improve sustainably the ecolo…
ASSESSMENT OF ANNUAL MINIMUM TEMPERATURE IN SOME ...
USER
2016-04-11
Apr 11, 2016 ... This work attempts to investigate the pattern of minimum temperature from 19 1 to 2006; an attempt was also .... Similarly the heavy cloud cover acts as a blanket for terrestrial ... within a General Circulation Model (GCM) can be ...
Minimum Competency Testing--Grading or Evaluation?
Prakash, Madhu Suri
The consequences of the minimum competency testing movement may bring into question the basic assumptions, goals, and expectations of our school system. The intended use of these tests is the assessment of students; the unintended consequence may be the assessment of the school system. There are two ways in which schools may fail in the context of…
Minimum intervention dentistry: periodontics and implant dentistry.
Darby, I B; Ngo, L
2013-06-01
This article will look at the role of minimum intervention dentistry in the management of periodontal disease. It will discuss the role of appropriate assessment, treatment and risk factors/indicators. In addition, the role of the patient and early intervention in the continuing care of dental implants will be discussed as well as the management of peri-implant disease.
Minimum output entropy of Gaussian channels
Lloyd, S; Maccone, L; Pirandola, S; Garcia-Patron, R
2009-01-01
We show that the minimum output entropy for all single-mode Gaussian channels is additive and is attained for Gaussian inputs. This allows the derivation of the channel capacity for a number of Gaussian channels, including that of the channel with linear loss, thermal noise, and linear amplification.
7 CFR 35.13 - Minimum quantity.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Minimum quantity. 35.13 Section 35.13 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS EXPORT...
7 CFR 33.10 - Minimum requirements.
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Regulations § 33.10 Minimum requirements. No person shall... Early: Provided, That apples for export to Pacific ports of Russia shall grade at least U.S. Utility...
Maximum detection range limitation of pulse laser radar with Geiger-mode avalanche photodiode array
Luo, Hanjun; Xu, Benlian; Xu, Huigang; Chen, Jingbo; Fu, Yadan
2015-05-01
When designing and evaluating the performance of a laser radar system, the achievable maximum detection range is an essential parameter. The purpose of this paper is to propose a theoretical model of maximum detection range for simulating the Geiger-mode laser radar's ranging performance. Based on the laser radar equation and the requirement of the minimum acceptable detection probability, and assuming the primary electrons triggered by the echo photons obey Poisson statistics, the maximum range theoretical model is established. By using the system design parameters, the influence of five main factors, namely emitted pulse energy, noise, echo position, atmospheric attenuation coefficient, and target reflectivity, on the maximum detection range is investigated. The results show that stronger emitted pulse energy, lower noise level, a more forward echo position in the range gate, lower atmospheric attenuation coefficient, and higher target reflectivity can result in greater maximum detection range. It is also shown that it is important to select the minimum acceptable detection probability, which is equivalent to the system signal-to-noise ratio, for producing greater maximum detection range and lower false-alarm probability.
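The abstract's reasoning can be sketched numerically. Assuming (as the abstract states) Poisson-distributed primary electrons, a Geiger-mode APD fires with probability P_d = 1 - exp(-N), where the mean photoelectron count N falls off with range per the laser radar equation. The constants (n_ref, alpha, rho) below are illustrative placeholders, not the paper's parameters:

```python
import math

def detection_probability(R, n_ref=5e4, alpha=2e-4, rho=0.3):
    """P_d for a Geiger-mode APD: with Poisson primary electrons of mean n_pe,
    the probability of at least one count is 1 - exp(-n_pe). n_pe falls off as
    reflectivity * exp(-2*alpha*R) / R**2 (laser-radar-equation form)."""
    n_pe = n_ref * rho * math.exp(-2 * alpha * R) / R**2
    return 1.0 - math.exp(-n_pe)

def max_range(p_min, lo=1.0, hi=1e6, **kw):
    """Largest R with P_d >= p_min, found by bisection (P_d decreases with R)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if detection_probability(mid, **kw) >= p_min:
            lo = mid
        else:
            hi = mid
    return lo

r1 = max_range(0.9)                 # baseline
r2 = max_range(0.9, n_ref=1e5)      # stronger emitted pulse energy
r3 = max_range(0.99)                # stricter minimum acceptable P_d
r4 = max_range(0.9, alpha=2e-3)     # heavier atmospheric attenuation
print(r1, r2, r3, r4)
```

Consistent with the abstract's qualitative findings, stronger pulse energy extends the range, while a stricter acceptable detection probability or heavier attenuation shortens it.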
Alomair O.
2015-11-01
Miscible gas injection is one of the most important enhanced oil recovery (EOR) approaches for increasing oil recovery. Due to the massive cost associated with this approach, a high degree of accuracy is required for predicting the outcome of the process. Such accuracy includes the preliminary screening parameters for gas miscible displacement: the "Minimum Miscibility Pressure" (MMP) and the availability of the gas. All conventional and state-of-the-art MMP measurement methods are either time consuming or decidedly cost demanding processes. Therefore, in order to address the immediate industry demands, a nonparametric approach, Alternating Conditional Expectation (ACE), is used in this study to estimate MMP. This algorithm of Breiman and Friedman [Breiman L., Friedman J.H. (1985) J. Am. Stat. Assoc. 80(391), 580-619] estimates the transformations of a set of predictors (here C1, C2, C3, C4, C5, C6, C7+, CO2, H2S, N2, Mw5+, Mw7+ and T) and a response (here MMP) that produce the maximum linear effect between these transformed variables. One hundred thirteen MMP data points are considered, both from the relevant published literature and from the experimental work. Five MMP measurements for Kuwaiti oil are included as part of the testing data. The proposed model is validated using detailed statistical analysis; a reasonably good correlation coefficient of 0.956 is obtained as compared to the existing correlations. Similarly, standard deviation and average absolute error values are at the lowest, at 139 psia (8.55 bar) and 4.68% respectively. Hence, it reveals that the results are more reliable than the existing correlations for pure CO2 injection to enhance oil recovery. In addition to its accuracy, the ACE approach is more powerful, quicker, and can handle huge datasets.
Paleoceanographic insights on recent oxygen minimum zone expansion: lessons for modern oceanography.
Moffitt, Sarah E; Moffitt, Russell A; Sauthoff, Wilson; Davis, Catherine V; Hewett, Kathryn; Hill, Tessa M
2015-01-01
Climate-driven Oxygen Minimum Zone (OMZ) expansions in the geologic record provide an opportunity to characterize the spatial and temporal scales of OMZ change. Here we investigate OMZ expansion through the global-scale warming event of the most recent deglaciation (18-11 ka), an event with clear relevance to understanding modern anthropogenic climate change. Deglacial marine sediment records were compiled to quantify the vertical extent, intensity, surface area and volume impingements of hypoxic waters upon continental margins. By integrating sediment records (183-2,309 meters below sea level; mbsl) containing one or more geochemical, sedimentary or microfossil oxygenation proxies integrated with analyses of eustatic sea level rise, we reconstruct the timing, depth and intensity of seafloor hypoxia. The maximum vertical OMZ extent during the deglaciation was variable by region: Subarctic Pacific (~600-2,900 mbsl), California Current (~330-1,500 mbsl), Mexico Margin (~330-830 mbsl), and the Humboldt Current and Equatorial Pacific (~110-3,100 mbsl). The timing of OMZ expansion is regionally coherent but not globally synchronous. Subarctic Pacific and California Current continental margins exhibit tight correlation to the oscillations of Northern Hemisphere deglacial events (Termination IA, Bølling-Allerød, Younger Dryas and Termination IB). Southern regions (Mexico Margin and the Equatorial Pacific and Humboldt Current) exhibit hypoxia expansion prior to Termination IA (~14.7 ka), and no regional oxygenation oscillations. Our analyses provide new evidence for the geographically and vertically extensive expansion of OMZs, and the extreme compression of upper-ocean oxygenated ecosystems during the geologically recent deglaciation.
2007-01-01
In October 2006, the Economic Policy Institute released a "Raise the Minimum Wage" statement signed by more than 650 individuals. Using an open-ended, non-anonymous questionnaire, we asked the signatories to explain their thinking on the issue. The questionnaire asked about the specific mechanisms at work, possible downsides, and whether the minimum wage violates liberty. Ninety-five participated. This article reports the responses. It also summarizes findings from minimum-wage surveys sin...
Distribution of phytoplankton groups within the deep chlorophyll maximum
Latasa, Mikel
2016-11-01
The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle has been known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle as its variation reflects the temporal evolution of the dynamic process of solar magnetic activities from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum (Rmax) and the rising rate (β_a) at Δm months after the solar minimum (Rmin) is studied and shown to increase as the cycle progresses with an inflection point (r = 0.83) at about Δm = 20 months. The prediction error of Rmax based on β_a is found within estimation at the 90% level of confidence and the relative prediction error will be less than 20% when Δm ≥ 20. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of Rmax = 84 ± 33 at the 90% level of confidence.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Minimum Wage Laws and the Distribution of Employment.
Lang, Kevin
The desirability of raising the minimum wage long revolved around just one question: the effect of higher minimum wages on the overall level of employment. An even more critical effect of the minimum wage rests on the composition of employment--who gets the minimum wage job. An examination of employment in eating and drinking establishments…
14 CFR 25.149 - Minimum control speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Minimum control speed. 25.149 Section 25... Minimum control speed. (a) In establishing the minimum control speeds required by this section, the method... prevent a heading change of more than 20 degrees. (e) VMCG, the minimum control speed on the ground,...
49 CFR 538.5 - Minimum driving range.
2010-10-01
... 49 Transportation 6 2010-10-01 2010-10-01 false Minimum driving range. 538.5 Section 538.5... Minimum driving range. (a) The minimum driving range that a passenger automobile must have in order to be... electricity. (b) The minimum driving range that a passenger automobile using electricity as an...
12 CFR 3.6 - Minimum capital ratios.
2010-01-01
... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Minimum capital ratios. 3.6 Section 3.6 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE OF DIRECTIVES Minimum Capital Ratios § 3.6 Minimum capital ratios. (a) Risk-based capital ratio....
Minimum and terminal velocities in projectile motion
Miranda, E N; Riba, R
2012-01-01
The motion of a projectile with horizontal initial velocity V0, moving under the action of the gravitational field and a drag force, is studied analytically. As is well known, the projectile reaches a terminal velocity Vterm. There is a curious result concerning the minimum speed Vmin: it turns out that the minimum velocity is lower than the terminal one if V0 > Vterm and is lower than the initial one if V0 < Vterm. These results show that the speed is not a monotonic function of time. If the initial velocity is not horizontal, there is an angle range where the speed shows the same behavior mentioned previously. Outside that range, the speed is a monotonic function. These results come out from numerical simulations.
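The non-monotonic speed described above is easy to reproduce numerically. A minimal sketch, assuming linear drag (acceleration = gravity minus k·v; the abstract does not specify the drag law) and a horizontal launch faster than terminal speed:

```python
import math

def speed_history(v0, g=9.81, k=0.5, dt=1e-3, t_max=30.0):
    """Euler-integrate a horizontally launched projectile with linear drag
    and return the speed at every time step (vy is positive downward)."""
    vx, vy = v0, 0.0
    speeds = [v0]
    steps = int(t_max / dt)
    for _ in range(steps):
        vx += -k * vx * dt            # horizontal component decays
        vy += (g - k * vy) * dt       # vertical component grows toward g/k
        speeds.append(math.hypot(vx, vy))
    return speeds

g, k = 9.81, 0.5
v_term = g / k            # terminal speed for linear drag
v0 = 2.0 * v_term         # launch faster than terminal
speeds = speed_history(v0, g, k)
print(min(speeds), speeds[-1])
```

With these numbers the speed first falls below the terminal value, passes through a minimum, and then rises back toward Vterm from below, so it is indeed non-monotonic.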
Bistable dielectric elastomer minimum energy structures
Zhao, Jianwen; Wang, Shu; McCoul, David; Xing, Zhiguang; Huang, Bo; Liu, Liwu; Leng, Jinsong
2016-07-01
Dielectric elastomer minimum energy structures (DEMES) can realize large angular deformations by small voltage-induced strains, which make them an attractive candidate for use as soft actuators. If the task only needs binary action, the bistable structure will be an efficient solution and can save energy because it requires only a very short duration of voltage to switch its state. To obtain bistable DEMES, a method to realize the two stable states of traditional DEMES is provided in this paper. Based on this, a type of symmetrical bistable DEMES is proposed, and the required actuation pulse duration is shorter than 0.1 s. When a suitable mass is attached to end of the DEMES, or two layers of dielectric elastomer are affixed to both sides of the primary frame, the DEMES can realize two stable states and can be switched by a suitable pulse duration. To calculate the required minimum pulse duration, a mathematical model is provided and validated by experiment.
Minimum Energy Demand Locomotion on Space Station
Wing Kwong Chung
2013-01-01
The energy of a space station is a precious resource, and the minimization of the energy consumption of a space manipulator is crucial to maintaining its normal functionalities. This paper first presents novel gaits for space manipulators equipped with a new gripping mechanism. With the use of wheeled locomotion, gaits with lower energy demand can be achieved. Using the proposed gaits, we further develop a global path planning algorithm for space manipulators which can plan a moving path on a space station with a minimum total energy demand. Different from existing approaches, we emphasize both the use of the proposed low energy demand gaits and the gaits composition during the path planning process. To evaluate the performance of the proposed gaits and path planning algorithm, numerous simulations are performed. Results show that the energy demand of both the proposed gaits and the resultant moving path is minimized.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied for searching the distribution functions of physical values. MENT takes into consideration the demand of maximum entropy, the characteristics of the system and the connection conditions, naturally. It is allowed to apply MENT for statistical description of closed and open systems. The examples in which MENT had been used for the description of the equilibrium and nonequilibrium states and the states far from the thermodynamical equilibrium are considered
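The core of any maximum-entropy technique is the constrained optimization the abstract alludes to: among all distributions consistent with the known characteristics of the system, pick the one of maximum entropy. A minimal textbook sketch (the classic "dice" problem, not the paper's MENT implementation): the entropy-maximizing distribution on {1,…,6} with a prescribed mean has Gibbs form p_k ∝ exp(λk), and the multiplier λ can be found by bisection:

```python
import math

def maxent_die(mean_target, tol=1e-12):
    """Maximum-entropy distribution on faces 1..6 subject to a fixed mean.
    The solution is exponential-family, p_k proportional to exp(lam*k);
    bisect on lam since the implied mean increases monotonically with it."""
    faces = range(1, 7)

    def mean_for(lam):
        w = [math.exp(lam * k) for k in faces]
        z = sum(w)
        return sum(k * wk for k, wk in zip(faces, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < mean_target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * k) for k in faces]
    z = sum(w)
    return [wk / z for wk in w]

p = maxent_die(4.5)   # maximum-entropy die with mean 4.5
print([round(x, 4) for x in p])
```

With no binding constraint (mean 3.5) the result is the uniform distribution, which is the unconstrained entropy maximum; a mean above 3.5 tilts probability toward the higher faces.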
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Examining Different Regions of Relevance: From Highly Relevant to Not Relevant.
Spink, Amanda; Greisdorf, Howard; Bateman, Judy
1998-01-01
Proposes a useful concept of relevance as a relationship and an effect on the movement of a user through the iterative stages of their information seeking process, and that users' relevance judgments can be plotted on a Three-Dimensional Spatial Model of Relevance Level, Degree and Time. Discusses implications for the development of information…
Minimum quality standards and international trade
Baltzer, Kenneth Thomas
2011-01-01
This paper investigates the impact of a non-discriminating minimum quality standard (MQS) on trade and welfare when the market is characterized by imperfect competition and asymmetric information. A simple partial equilibrium model of an international Cournot duopoly is presented in which...... prefer different levels of regulation. As a result, international trade disputes are likely to arise even when regulation is non-discriminating....
Proposed production test for reducing minimum downtime
Jaklevick, J.F.
1961-11-29
The object of the production test described in this report is to evaluate the operational aspects of a proposed method for reducing minimum downtime. The excess xenon poisoning, which occurs during the first 32--38 hours after the shutdown of a reactor from present equilibrium levels, will be partially overridden by a central enriched zone whose added reactivity contribution would be compensated during normal operation by means of poison splines.
Minimum Description Length Shape and Appearance Models
Thodberg, Hans Henrik
2003-01-01
The Minimum Description Length (MDL) approach to shape modelling is reviewed. It solves the point correspondence problem of selecting points on shapes defined as curves so that the points correspond across a data set. An efficient numerical implementation is presented and made available as open source Matlab code. The problems with the early MDL approaches are discussed. Finally the MDL approach is extended to an MDL Appearance Model, which is proposed as a means to perform unsupervised image segmentation.
Minimum degree and density of binary sequences
Brandt, Stephan; Müttel, J.; Rautenbach, D.
2010-01-01
For d,k∈N with k ≤ 2d, let g(d,k) denote the infimum density of binary sequences (x_i) ∈ {0,1} which satisfy the minimum degree condition σ(x+) ≥ k for all i∈Z with x_i = 1. We reduce the problem of computing g(d,k) to a combinatorial problem related to the generalized k-girth of a graph G which...
Time crystals from minimum time uncertainty
Faizal, Mir; Khalil, Mohammed M.; Das, Saurya
2016-01-01
Motivated by the Generalized Uncertainty Principle, covariance, and a minimum measurable time, we propose a deformation of the Heisenberg algebra and show that this leads to corrections to all quantum mechanical systems. We also demonstrate that such a deformation implies a discrete spectrum for time. In other words, time behaves like a crystal. As an application of our formalism, we analyze the effect of such a deformation on the rate of spontaneous emission in a hydrogen atom.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; while for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
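The demarking-point rule described above is a simple pair of thresholds per side. A sketch using the cut-off values reported in the abstract (lengths assumed to be in mm, as is conventional for osteometric-board measurements):

```python
# Demarking-point sex classification of femora from maximum length,
# using the side-specific cut-offs reported in the abstract.
DEMARKING = {
    "right": {"male_above": 476.70, "female_below": 379.99},
    "left":  {"male_above": 484.49, "female_below": 385.73},
}

def classify_femur(side, max_length_mm):
    """Return 'male', 'female', or 'indeterminate' for a femur of the given
    side; only lengths beyond the demarking points are decisive."""
    dp = DEMARKING[side]
    if max_length_mm > dp["male_above"]:
        return "male"
    if max_length_mm < dp["female_below"]:
        return "female"
    return "indeterminate"

print(classify_femur("right", 480.0))   # beyond the right male demarking point
print(classify_femur("left", 380.0))    # below the left female demarking point
print(classify_femur("right", 450.0))   # overlap zone: not decisive
```

The wide indeterminate band between the two cut-offs is why the abstract reports that maximum length alone classifies only a small percentage of the sample.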
van der Laan, Mark; Gruber, Susan
2016-05-01
Consider a study in which one observes n independent and identically distributed random variables whose probability distribution is known to be an element of a particular statistical model, and one is concerned with estimation of a particular real valued pathwise differentiable target parameter of this data probability distribution. The targeted maximum likelihood estimator (TMLE) is an asymptotically efficient substitution estimator obtained by constructing a so called least favorable parametric submodel through an initial estimator with score, at zero fluctuation of the initial estimator, that spans the efficient influence curve, and iteratively maximizing the corresponding parametric likelihood till no more updates occur, at which point the updated initial estimator solves the so called efficient influence curve equation. In this article we construct a one-dimensional universal least favorable submodel for which the TMLE only takes one step, and thereby requires minimal extra data fitting to achieve its goal of solving the efficient influence curve equation. We generalize these to universal least favorable submodels through the relevant part of the data distribution as required for targeted minimum loss-based estimation. Finally, remarkably, given a multidimensional target parameter, we develop a universal canonical one-dimensional submodel such that the one-step TMLE, only maximizing the log-likelihood over a univariate parameter, solves the multivariate efficient influence curve equation. This allows us to construct a one-step TMLE based on a one-dimensional parametric submodel through the initial estimator, that solves any multivariate desired set of estimating equations.
What causes geomagnetic activity during sunspot minimum
Kirov, Boian; Georgieva, Katya; Obridko, Vladimir
2014-01-01
The average geomagnetic activity during sunspot minimum has been continuously decreasing over the last four cycles. Geomagnetic activity is caused both by interplanetary disturbances (coronal mass ejections and high-speed solar wind streams) and by the background solar wind over which these disturbances ride. We show that the geomagnetic activity in cycle minimum does not depend on the number and parameters of coronal mass ejections or high-speed solar wind streams, but on the background solar wind. The background solar wind has two components: slower and faster. The source of the slower component is the heliospheric current sheet, and of the faster one the polar coronal holes. It is supposed that the geomagnetic activity in cycle minimum is determined by the thickness of the heliospheric current sheet, which is related to the portions of time the Earth spends in slow and in fast solar wind. We demonstrate that it is also determined by the parameters of these two components of the background solar wind which v...
The minimum entropy principle and task performance.
Guastello, Stephen J; Gorin, Hillary; Huschen, Samuel; Peters, Natalie E; Fabisch, Megan; Poston, Kirsten; Weinberger, Kelsey
2013-07-01
According to the minimum entropy principle, efficient cognitive performance is produced with a neurocognitive strategy that involves a minimum of degrees of freedom. Although high performance is often regarded as consistent performance as well, some variability in performance still remains which allows the person to adapt to changing goal conditions or fatigue. The present study investigated the connection between performance, entropy in performance, and four task-switching strategies. Fifty-one undergraduates performed 7 different computer-based cognitive tasks producing sets of 49 responses under instructional conditions requiring task quotas or no quotas. The temporal patterns of performance were analyzed using orbital decomposition to extract pattern types and lengths, which were then compared with regard to Shannon entropy, topological entropy, and overall performance. Task switching strategies from a previous study were available for the same participants as well. Results indicated that both topological entropy and Shannon entropy were negatively correlated with performance. Some task-switching strategies produced lower entropy in performance than others. Stepwise regression showed that the top three predictors of performance were Shannon entropy and arithmetic and spatial abilities. Additional implications for the prediction of work performance with cognitive ability measurements and the applicability of the minimum entropy principle to multidimensional performance criteria and team work are discussed.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Recent advance on the efficiency at maximum power of heat engines
Tu Zhan-Chun
2012-01-01
This review reports several key advances in the theoretical investigation of efficiency at maximum power of heat engines over the past five years. The analytical results of efficiency at maximum power for the Curzon-Ahlborn heat engine, the stochastic heat engine constructed from a Brownian particle, and Feynman's ratchet as a heat engine are presented. It is found that: the efficiency at maximum power exhibits universal behavior at small relative temperature differences; the lower and the upper bounds might exist under quite general conditions; and the problem of efficiency at maximum power comes down to seeking the minimum irreversible entropy production in each finite-time isothermal process for a given time.
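Of the results surveyed above, the Curzon-Ahlborn efficiency η_CA = 1 − √(Tc/Th) is the most compact, and the universal small-difference behavior η_CA ≈ η_C/2 is easy to check numerically. A minimal sketch (the function names and example temperatures are ours, not from the review):

```python
import math

def carnot_efficiency(t_cold: float, t_hot: float) -> float:
    """Carnot (reversible) efficiency for reservoir temperatures in kelvin."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_cold: float, t_hot: float) -> float:
    """Efficiency at maximum power of an endoreversible engine:
    eta_CA = 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

# A 500 K / 300 K engine: eta_C = 0.40, eta_CA is about 0.225.
print(curzon_ahlborn_efficiency(300.0, 500.0))

# At a small relative temperature difference the ratio eta_CA / eta_C
# approaches 1/2, the universal behavior mentioned in the review.
print(curzon_ahlborn_efficiency(399.0, 400.0) / carnot_efficiency(399.0, 400.0))
```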
Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space
Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf
2012-01-01
The stretch factor and maximum detour of a graph G embedded in a metric space measure how well G approximates the minimum complete graph containing G and the metric space, respectively. In this paper we show that computing the stretch factor of a rectilinear path in the L_1 plane has a lower bound of Ω(n log n) in the algebraic computation tree model, and describe a worst-case O(σn log^2 n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ ... compute the stretch factor or maximum detour of trees and cycles in O(σn log^{d+1} n) time. We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L_1 plane. © 2012 World Scientific...
Optimizing Reactors Selection and Sequencing: Minimum Cost versus Minimum Volume
Rachid Chebbi
2014-01-01
The present investigation targets the minimum cost of reactors in series for the case of a single chemical reaction, considering plug flow and stirred tank reactor(s) in the sequence of flow reactors. Using Guthrie's cost correlations, three typical cases were considered based on the profile of the reciprocal reaction rate versus conversion. Significant differences were found compared to the classical approach targeting minimum total reactor volume.
The Wage and Employment Dynamics of Minimum Wage Workers
Even, William E.; Macpherson, David A.
2004-01-01
This study uses 20 years of short panel data sets on minimum wage workers to examine the wage and employment dynamics of minimum wage workers. Compared to workers earning above the minimum wage, minimum wage workers differ substantially in several ways. First, minimum wage workers are much more likely to be new entrants and much more likely to exit the labor market. Second, changes in industry and occupation and access to job training are particularly important to improving the wages of minim...
2010-07-01
Title 29 (Labor), Code of Federal Regulations, 2010-07-01: Regulations Relating to Labor (Continued), Wage and Hour Division, Department of Labor, Records to Be Kept by Employers, General Requirements, § 516.2: Employees subject to minimum wage or minimum wage and...
On the Hybrid Minimum Principle On Lie Groups and the Exponential Gradient HMP Algorithm
Taringoo, Farzin; Caines, Peter E.
2013-01-01
This paper provides a geometrical derivation of the Hybrid Minimum Principle (HMP) for autonomous hybrid systems whose state manifolds constitute Lie groups $(G,\\star)$ which are left invariant under the controlled dynamics of the system, and whose switching manifolds are defined as smooth embedded time invariant submanifolds of $G$. The analysis is expressed in terms of extremal (i.e. optimal) trajectories on the cotangent bundle of the state manifold $G$. The Hybrid Maximum Principle (HMP) ...
Minimum extreme temperature in the gulf of mexico: is there a connection with solar activity?
Maravilla, D.; Mendoza, B.; Jauregui, E.
Minimum extreme temperature (MET) series from several meteorological stations around the Gulf of Mexico are spectrally analyzed using the Maximum Entropy Method. We obtained periodicities similar to those found in the sunspot number, the magnetic solar cycle, cosmic ray fluxes, and geomagnetic activity, which are modulated by solar activity. We suggest that the solar signal is perhaps present in the MET record of this region of Mexico.
Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world's major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The "best available technology" (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.
Active and Passive Minimum of Grammar in Teaching a Foreign Language
Aleksandras Velička
2011-04-01
This article analyses the problem of the active and passive minimum of grammar, together with the related questions: the selection of grammar material, its presentation, teaching, consolidation, and realization in the communication process. Analysing students' mistakes, the author states that the main criteria for selecting the passive grammar minimum must be the frequency of an item and the possibilities of its usage in a given text, while the main factor for choosing the active grammar minimum is the communicative value of the grammar. The article argues that different grammar exercises are needed for receptive and reproductive learning, and also touches on the possibilities of presenting, consolidating, and realizing grammar material in a concrete situation.
User perspectives on relevance criteria
Maglaughlin, Kelly L.; Sonnenwald, Diane H.
2002-01-01
... matter, thought catalyst), full text (e.g., audience, novelty, type, possible content, utility), journal/publisher (e.g., novelty, main focus, perceived quality), and personal (e.g., competition, time requirements). Results further indicate that multiple criteria are used when making relevant, partially relevant, and not-relevant judgments, and that most criteria can have either a positive or negative contribution to the relevance of a document. The criteria most frequently mentioned by study participants were content, followed by criteria characterizing the full text document. These findings may have implications for relevance feedback in information retrieval systems, suggesting that systems accept and utilize multiple positive and negative relevance criteria from users. Systems designers may want to focus on supporting content criteria followed by full text criteria as these may provide the greatest cost...
Relevance-driven Pragmatic Inferences
王瑞彪
2013-01-01
Relevance theory, an inferential approach to pragmatics, claims that the hearer is expected to pick out the input of optimal relevance from a mass of alternative inputs produced by the speaker in order to interpret the speaker's intentions. The degree of the relevance of an input can be assessed in terms of cognitive effects and processing effort. The input of optimal relevance is the one yielding the greatest positive cognitive effect and requiring the least processing effort. This paper attempts to assess the degrees of relevance of a mass of alternative inputs produced by an imaginary speaker, from the perspective of her corresponding hearer, in terms of cognitive effects and processing effort, with a view to justifying the feasibility of the principle of relevance in pragmatic inferences.
Hyland, D. C.
1983-01-01
A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).
Yin, Changming; Zhao, Lincheng; Wei, Chengdong
2006-01-01
In a generalized linear model with q × 1 responses, bounded and fixed (or adaptive) p × q regressors Z_i, and a general link function, under the most general assumption on the minimum eigenvalue of ∑_{i=1}^{n} Z_i Z_i', a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates of the regression parameter vector are asymptotically normal and strongly consistent.
ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS
YUE LI; CHEN XIRU
2005-01-01
For the Generalized Linear Model (GLM), under some conditions, including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) in the case that the specification of the covariance matrix is correct.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems in general show a continuously rising rotation curve out to the outermost measured radial position. A general relation has therefore been derived giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour; this functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC, and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on these sample results, the total length of CC used in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D, distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
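The bound stated above, maximum seismic moment limited to the injected volume times the modulus of rigidity, can be sketched numerically. A minimal sketch, assuming a typical crustal rigidity of 3×10^10 Pa and the Hanks-Kanamori moment magnitude relation (both are our assumptions, not values from the abstract):

```python
import math

SHEAR_MODULUS_PA = 3.0e10  # typical crustal modulus of rigidity (assumed value)

def max_seismic_moment(injected_volume_m3: float,
                       modulus_pa: float = SHEAR_MODULUS_PA) -> float:
    """McGarr's bound: the maximum seismic moment (N*m) is limited to the
    modulus of rigidity times the total injected fluid volume."""
    return modulus_pa * injected_volume_m3

def moment_magnitude(m0_newton_m: float) -> float:
    """Moment magnitude Mw from seismic moment M0 in N*m
    (Hanks-Kanamori relation, an assumption of this sketch)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_m) - 9.1)

# Injecting 1e5 m^3 bounds M0 at 3e15 N*m, i.e. Mw of about 4.25.
print(moment_magnitude(max_seismic_moment(1.0e5)))
```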
Goal relevance as a quantitative model of human task relevance.
Tanner, James; Itti, Laurent
2017-03-01
The concept of relevance is used ubiquitously in everyday life. However, a general quantitative definition of relevance has been lacking, especially as pertains to quantifying the relevance of sensory observations to one's goals. We propose a theoretical definition for the information value of data observations with respect to a goal, which we call "goal relevance." We consider the probability distribution of an agent's subjective beliefs over how a goal can be achieved. When new data are observed, its goal relevance is measured as the Kullback-Leibler divergence between belief distributions before and after the observation. Theoretical predictions about the relevance of different obstacles in simulated environments agreed with the majority response of 38 human participants in 83.5% of trials, beating multiple machine-learning models. Our new definition of goal relevance is general, quantitative, explicit, and allows one to put a number onto the previously elusive notion of relevance of observations to a goal.
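The measure described above, goal relevance as the Kullback-Leibler divergence between belief distributions before and after an observation, fits in a few lines. A hedged sketch: the divergence direction and the toy belief distributions are our assumptions for illustration, not taken from the paper.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) in nats for discrete distributions given as equal-length
    sequences; terms with p_i = 0 contribute zero by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

def goal_relevance(belief_before, belief_after):
    """Goal relevance of an observation: divergence between the agent's
    belief distributions over ways to achieve the goal, after vs. before."""
    return kl_divergence(belief_after, belief_before)

# Three candidate routes to a goal; an observation rules route C out,
# so belief mass concentrates on routes A and B.
before = [1 / 3, 1 / 3, 1 / 3]
after = [0.5, 0.5, 0.0]
print(goal_relevance(before, after))  # log(1.5), about 0.405 nats
```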
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial approximation algorithm is proposed.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
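The Wiener index mentioned above is simply the sum of shortest-path distances over all unordered pairs of vertices. A small illustrative sketch for unweighted trees and graphs via repeated BFS (this is not the authors' pseudo-polynomial algorithm, just the definition):

```python
from collections import deque

def wiener_index(adj):
    """Wiener index: sum of shortest-path distances over all unordered
    vertex pairs of an unweighted, connected graph given as an
    adjacency list {node: [neighbors]}."""
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:  # BFS from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # each unordered pair was counted twice

# The path 0-1-2-3 has Wiener index 1+2+3+1+2+1 = 10.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wiener_index(path))  # 10
```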
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of the set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used, or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used ... algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find ...
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions ...
Minimum QOS Parameter Set in Transport Layer
汪芸; 顾冠群
1997-01-01
QOS (Quality of Service) parameter definitions are the basis of further QOS control, but the QOS parameters defined by organizations such as ISO and ITU are incoherent and incompatible, which leads to inefficiency of QOS controls. Based on an analysis of the QOS parameters defined by ISO and ITU, this paper first proposes a Minimum QOS Parameter Set for the transport layer. It demonstrates that the parameters defined by ISO and ITU can be represented by parameters, or a combination of parameters, of the Set. The paper also shows that the Set is open and manageable and can serve as a potential unified basis for QOS parameters.
Quantization of conductance minimum and index theorem
Ikegaya, Satoshi; Suzuki, Shu-Ichiro; Tanaka, Yukio; Asano, Yasuhiro
2016-08-01
We discuss the minimum value of the zero-bias differential conductance G_min in a junction consisting of a normal metal and a nodal superconductor preserving time-reversal symmetry. Using the quasiclassical Green function method, we show that G_min is quantized at (4e^2/h) N_ZES in the limit of strong impurity scattering in the normal metal at zero temperature. The integer N_ZES represents the number of perfect transmission channels through the junction. An analysis of the chiral symmetry of the Hamiltonian indicates that N_ZES corresponds to the Atiyah-Singer index in mathematics.
Decentralized Pricing in Minimum Cost Spanning Trees
Hougaard, Jens Leth; Moulin, Hervé; Østerdal, Lars Peter
In the minimum cost spanning tree model we consider decentralized pricing rules, i.e. rules that cover at least the efficient cost while the price charged to each user depends only upon his own connection costs. We define a canonical pricing rule and provide two axiomatic characterizations. First, the canonical pricing rule is the smallest among those that improve upon the Stand Alone bound and are either superadditive or piece-wise linear in connection costs. Our second, direct characterization relies on two simple properties highlighting the special role of the source cost.
Iterative Regularization with Minimum-Residual Methods
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
Quantum Monte Carlo for minimum energy structures
Wagner, Lucas K
2010-01-01
We present an efficient method to find minimum energy structures using energy estimates from accurate quantum Monte Carlo calculations. This method involves a stochastic process formed from the stochastic energy estimates from Monte Carlo that can be averaged to find precise structural minima while using inexpensive calculations with moderate statistical uncertainty. We demonstrate the applicability of the algorithm by minimizing the energy of the H2O-OH- complex and showing that the structural minima from quantum Monte Carlo calculations affect the qualitative behavior of the potential energy surface substantially.
First minimum bias physics results at LHCb
Dettori, Francesco
2010-01-01
We report on the first measurements of the LHCb experiment, as obtained from $pp$ collisions at $\sqrt{s}$ = 0.9 TeV and 7 TeV recorded using a minimum bias trigger. In particular, measurements of the absolute $K_S^0$ production cross section at $\sqrt{s}$ = 0.9 TeV and of the $\bar{\Lambda}/\Lambda$ ratio, both at $\sqrt{s}$ = 0.9 TeV and 7 TeV, are discussed and preliminary results are presented.
Iterative regularization with minimum-residual methods
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES - their success as regularization methods is highly problem dependent.
Aerobrake assembly with minimum Space Station accommodation
Katzberg, Steven J.; Butler, David H.; Doggett, William R.; Russell, James W.; Hurban, Theresa
1991-01-01
The minimum Space Station Freedom accommodations required for initial assembly, repair, and refurbishment of the Lunar aerobrake were investigated. Baseline Space Station Freedom support services were assumed, as well as reasonable earth-to-orbit possibilities. A set of three aerobrake configurations representative of the major themes in aerobraking were developed. Structural assembly concepts, along with on-orbit assembly and refurbishment scenarios were created. The scenarios were exercised to identify required Space Station Freedom accommodations. Finally, important areas for follow-on study were also identified.
Minimum Reservoir Water Level in Hydropower Dams
Sarkardeh, Hamed
2017-07-01
Vortex formation over intakes is an undesirable phenomenon in the water withdrawal process from a dam reservoir. Estimating the minimum operating water level of power intakes with empirical equations alone is not reliable and can introduce errors. Therefore, the current method for calculating the critical submergence of a power intake is to construct a scaled physical model in parallel with a numerical model. In this research, several proposed empirical relations for predicting the submergence depth of power intakes were validated against experimental data from different physical and numerical models of power intakes. The results showed that equations which incorporate the intake geometry correspond better with the experimental and numerical data.
The Risk Management of Minimum Return Guarantees
Antje Mahayni
2008-05-01
Contracts paying a guaranteed minimum rate of return and a fraction of a positive excess rate, which is specified relative to a benchmark portfolio, are closely related to unit-linked life-insurance products and can be considered as alternatives to direct investment in the underlying benchmark. They contain an embedded power option, and the key issue is the tractable and realistic hedging of this option, in order to rigorously justify valuation by arbitrage arguments and prevent the guarantees from becoming uncontrollable liabilities to the issuer. We show how to determine the contract parameters conservatively and implement robust risk-management strategies.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of the various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided, that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs are constructed, that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}=\mathcal{O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the electron Yukawa coupling, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\,\text{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl}y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged interclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointe- gration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....
Implementation of the Local Minimum Wage in Malang City (A Case Study in Malang City, 2014)
Dhea Candra Dewi
2015-04-01
The wage system is a framework for how wages are set and defined in order to improve the welfare of workers. The Indonesian government attempts to set a minimum wage in accordance with a decent standard of living. This study analyzes the Local Minimum Wage policy in Malang City in 2014, its implementation, and the factors constraining it. The research uses the interactive model of analysis introduced by Miles and Huberman [6], consisting of data collection, data reduction, data display, and conclusion drawing. Constraining factors are seen in the responses given to the policy by the relevant actors, such as employer organizations, worker unions, wage councils, and local government. First, companies do not use the wage scale system suggested by the policy. Second, there is a marked lack of communication forums between companies and worker unions. Third, some small and large companies are unable to pay the minimum standard wage. Lastly, disagreement over the applied wage scale among the local wage council, employer organizations, and worker unions often occurs in the tripartite communication forum. Keywords: Employer Organizations, Local Minimum Wage, Local Wage Council, Policy Implementation, Tripartite Communication Forum, Worker Unions.
Optimal Succinctness for Range Minimum Queries
Fischer, Johannes
2008-01-01
For an array A of n objects from a totally ordered universe, a range minimum query (RMQ) asks for the position of the minimum element in the sub-array A[i,j]. We focus on the setting where the array A is static and known in advance, and can hence be preprocessed into a scheme in order to answer future queries faster. We make the further assumption that the input array A cannot be used at query time. Under this assumption, a natural lower bound of 2n bits for RMQ-schemes exists. We give the first truly succinct preprocessing scheme for O(1)-RMQs. Its final space consumption is 2n+o(n) bits, thus being asymptotically optimal. We also give a simple linear-time construction algorithm for this scheme that needs only n+o(n) bits of space in addition to the 2n+o(n) bits needed for the final data structure, thereby lowering the peak space consumption of previous schemes from O(n log n) to O(n) bits. We also improve on LCA-computation in BPS- and DFUDS-encoded trees.
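The O(1)-query semantics described in this abstract can be illustrated with the classical sparse-table scheme. This is a hedged sketch for illustration only: it uses O(n log n) words of space, not the paper's succinct 2n+o(n)-bit structure, but it answers the same queries in constant time by comparing two overlapping power-of-two windows.

```python
def build_sparse_table(a):
    """st[k][i] holds the index of the minimum of a[i : i + 2**k]."""
    n = len(a)
    st = [list(range(n))]
    k = 1
    while (1 << k) <= n:
        prev, half = st[k - 1], 1 << (k - 1)
        # Combine two half-windows; ties go to the leftmost position.
        st.append([
            prev[i] if a[prev[i]] <= a[prev[i + half]] else prev[i + half]
            for i in range(n - (1 << k) + 1)
        ])
        k += 1
    return st

def rmq(a, st, i, j):
    """Position of the minimum element of a[i..j] (inclusive), O(1) per query."""
    k = (j - i + 1).bit_length() - 1
    left, right = st[k][i], st[k][j - (1 << k) + 1]
    return left if a[left] <= a[right] else right
```

Note that the query inspects the original array `a`, which the paper's model explicitly forbids; the succinct scheme avoids this by encoding a Cartesian-tree structure instead.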
Constrained length minimum inductance gradient coil design.
Chronik, B A; Rutt, B K
1998-02-01
A gradient coil design algorithm capable of controlling the position of the homogeneous region of interest (ROI) with respect to the current-carrying wires is required for many advanced imaging and spectroscopy applications. A modified minimum inductance target field method that allows the placement of a set of constraints on the final current density is presented. This constrained current minimum inductance method is derived in the context of previous target field methods. Complete details are shown and all equations required for implementation of the algorithm are given. The method has been implemented on computer and applied to the design of both a 1:1 aspect ratio (length:diameter) central ROI and a 2:1 aspect ratio edge ROI gradient coil. The 1:1 design demonstrates that a general analytic method can be used to easily obtain very short gradient coil designs for use with specialized magnet systems. The edge gradient design demonstrates that designs that allow imaging of the neck region with a head sized gradient coil can be obtained, as well as other applications requiring edge-of-cylinder regions of uniformity.
Minimum Cost Homomorphisms to Reflexive Digraphs
Gupta, Arvind; Karimi, Mehdi; Rafiey, Arash
2007-01-01
For digraphs $G$ and $H$, a homomorphism of $G$ to $H$ is a mapping $f: V(G)\to V(H)$ such that $uv\in A(G)$ implies $f(u)f(v)\in A(H)$. If moreover each vertex $u\in V(G)$ is associated with costs $c_i(u)$, $i\in V(H)$, then the cost of a homomorphism $f$ is $\sum_{u\in V(G)}c_{f(u)}(u)$. For each fixed digraph $H$, the {\em minimum cost homomorphism problem} for $H$, denoted MinHOM($H$), is the following problem. Given an input digraph $G$, together with costs $c_i(u)$, $u\in V(G)$, $i\in V(H)$, and an integer $k$, decide if $G$ admits a homomorphism to $H$ of cost not exceeding $k$. We focus on the minimum cost homomorphism problem for {\em reflexive} digraphs $H$ (every vertex of $H$ has a loop). It is known that the problem MinHOM($H$) is polynomial time solvable if the digraph $H$ has a {\em Min-Max ordering}, i.e., if its vertices can be linearly ordered by $<$ so that $i$
The minimum jet power and equipartition
Zdziarski, Andrzej A
2014-01-01
We derive the minimum power of jets and their magnetic field strength based on their observed non-thermal synchrotron emission. The correct form of this method takes into account both the internal energy in the jet and the ion rest-mass energy associated with the bulk motion. The latter was neglected in a number of papers, which instead adopted the well-known energy-content minimization method. That method was developed for static sources, for which there is no bulk-motion component of the energy. In the case of electron power-law spectra with index >2 in ion-electron jets, the rest-mass component dominates. The minimization method for the jet power taking it into account was considered in some other work, but only based on either an assumption of a constant total synchrotron flux or a fixed range of the Lorentz factors. Instead, we base our method on an observed optically-thin synchrotron spectrum. We find the minimum jet power is independent of its radius when the rest-mass power dominates, which becomes th...
The Maunder minimum (1645--1715) was indeed a Grand minimum: A reassessment of multiple datasets
Usoskin, Ilya G; Asvestari, Eleanna; Hawkins, Ed; Käpylä, Maarit; Kovaltsov, Gennady A; Krivova, Natalie; Lockwood, Michael; Mursula, Kalevi; O'Reilly, Jezebel; Owens, Matthew; Scott, Chris J; Sokoloff, Dmitry D; Solanki, Sami K; Soon, Willie; Vaquero, José M
2015-01-01
Aims: Although the time of the Maunder minimum (1645--1715) is widely known as a period of extremely low solar activity, claims are still debated that solar activity during that period might still have been moderate, even higher than the current solar cycle #24. We have revisited all the existing pieces of evidence and datasets, both direct and indirect, to assess the level of solar activity during the Maunder minimum. Methods: We discuss the East Asian naked-eye sunspot observations, the telescopic solar observations, the fraction of sunspot active days, the latitudinal extent of sunspot positions, auroral sightings at high latitudes, cosmogenic radionuclide data as well as solar eclipse observations for that period. We also consider peculiar features of the Sun (very strong hemispheric asymmetry of sunspot location, unusual differential rotation and the lack of the K-corona) that imply a special mode of solar activity during the Maunder minimum. Results: The level of solar activity during the Maunder minimu...
Minimum Energy Requirements in Complex Distillation Arrangements
Halvorsen, Ivar J.
2001-07-01
Distillation is the most widely used industrial separation technology and distillation units are responsible for a significant part of the total heat consumption in the world's process industry. In this work we focus on directly (fully thermally) coupled column arrangements for separation of multicomponent mixtures. These systems are also denoted Petlyuk arrangements, where a particular implementation is the dividing wall column. Energy savings in the range of 20-40% have been reported with ternary feed mixtures. In addition to energy savings, such integrated units also have a potential for reduced capital cost, making them extra attractive. However, the industrial use has been limited, and difficulties in design and control have been reported as the main reasons. Minimum energy results have only been available for ternary feed mixtures and sharp product splits. This motivates further research in this area, and this thesis will hopefully give some contributions to better understanding of complex column systems. In the first part we derive the general analytic solution for minimum energy consumption in directly coupled columns for a multicomponent feed and any number of products. To our knowledge, this is a new contribution in the field. The basic assumptions are constant relative volatility, constant pressure and constant molar flows, and the derivation is based on Underwood's classical methods. An important conclusion is that the minimum energy consumption in a complex directly integrated multi-product arrangement is the same as for the most difficult split between any pair of the specified products when we consider the performance of a conventional two-product column. We also present the Vmin-diagram, which is a simple graphical tool for visualisation of minimum energy related to feed distribution. The Vmin-diagram provides a simple means to assess the detailed flow requirements for all parts of a complex directly coupled arrangement. The main purpose in
Minimum dimension of an ITER like Tokamak with a given Q
Johner, J
2004-07-01
The minimum dimension of an ITER like tokamak with a given amplification factor Q is calculated for two values of the maximum magnetic field in the superconducting toroidal field coils. For ITERH-98P(y,2) scaling of the energy confinement time, it is shown that for a sufficiently large tokamak, the maximum Q is obtained for the operating point situated both at the maximum density and at the minimum margin with respect to the H-L transition. We have shown that increasing the maximum magnetic field in the toroidal field coils from the present 11.8 T to 16 T would result in a strong reduction of the machine size but has practically no effect on the fusion power. Values obtained for β_N are found to be below 2. Peak fluxes on the divertor plates with an ITER like divertor and a multi-machine expression for the power radiated in the plasma mantle, are below 10 MW/m².
Minimum Redundancy Coding for Uncertain Sources
Baer, Michael B; Charalambous, Charalambos D
2011-01-01
Consider the set of source distributions within a fixed maximum relative entropy with respect to a given nominal distribution. Lossless source coding over this relative entropy ball can be approached in more than one way. A problem previously considered is finding a minimax average length source code. The minimizing players are the codeword lengths --- real numbers for arithmetic codes, integers for prefix codes --- while the maximizing players are the uncertain source distributions. Another traditional minimizing objective is the first one considered here, maximum (average) redundancy. This problem reduces to an extension of an exponential Huffman objective treated in the literature but heretofore without direct practical application. In addition to these, this paper examines the related problem of maximal minimax pointwise redundancy and the problem considered by Gawrychowski and Gagie, which, for a sufficiently small relative entropy ball, is equivalent to minimax redundancy. One can consider both Shannon-...
FastTree 2--approximately maximum-likelihood trees for large alignments.
Morgan N Price
BACKGROUND: We recently described FastTree, a tool for inferring phylogenies for alignments with up to hundreds of thousands of sequences. Here, we describe improvements to FastTree that improve its accuracy without sacrificing scalability. METHODOLOGY/PRINCIPAL FINDINGS: Where FastTree 1 used nearest-neighbor interchanges (NNIs) and the minimum-evolution criterion to improve the tree, FastTree 2 adds minimum-evolution subtree-pruning-regrafting (SPRs) and maximum-likelihood NNIs. FastTree 2 uses heuristics to restrict the search for better trees and estimates a rate of evolution for each site (the "CAT" approximation). Nevertheless, for both simulated and genuine alignments, FastTree 2 is slightly more accurate than a standard implementation of maximum-likelihood NNIs (PhyML 3 with default settings). Although FastTree 2 is not quite as accurate as methods that use maximum-likelihood SPRs, most of the splits that disagree are poorly supported, and for large alignments, FastTree 2 is 100-1,000 times faster. FastTree 2 inferred a topology and likelihood-based local support values for 237,882 distinct 16S ribosomal RNAs on a desktop computer in 22 hours and 5.8 gigabytes of memory. CONCLUSIONS/SIGNIFICANCE: FastTree 2 allows the inference of maximum-likelihood phylogenies for huge alignments. FastTree 2 is freely available at http://www.microbesonline.org/fasttree.
Jamil, T.; Braak, ter C.J.F.
2012-01-01
Maximum Likelihood (ML) in the linear model overfits when the number of predictors (M) exceeds the number of objects (N). One possible solution is the relevance vector machine (RVM), which is a form of automatic relevance detection and has gained popularity in the pattern recognition machine l
Minimum wage impacts on youth employment transitions, 1993-1999
Michele Campolieti; Tony Fang; Morley Gunderson
2005-01-01
The longitudinal nature of the Master File of the Survey of Labour and Income Dynamics (SLID) for the period 1993-9, enables comparing transitions from employment to non-employment for individuals affected by minimum wage changes with appropriate comparison groups not affected by minimum wages. This is based on the large number (24) of minimum wage changes that have occurred across the different provincial jurisdictions in Canada over the 1990s. The results indicate that the minimum wage incr...
Minimum-time trajectory planning based on the shortest path for the wheeled mobile robot
LIN Feng-yun; LV Tian-sheng
2006-01-01
Time-optimal trajectory planning is proposed under kinematic and dynamic constraints for a 2-DOF wheeled robot. In order to make full use of the motor's capacity, we calculate the maximum torque and the minimum torque by considering the maximum heat-converted power generated by the DC motor. The shortest path is planned by using the geometric method under kinematic constraints. Under the bounded torques, the velocity limits and the maximum acceleration (deceleration) are obtained by combining them with the dynamics. We utilize the phase-plane analysis technique to generate the time-optimal trajectory based on the shortest path. Finally, computer simulations for our laboratory mobile robot were performed. The simulation results show that the proposed method is simple and effective for practical use.
Galactic Archaeology and Minimum Spanning Trees
Macfarlane, B A; Flynn, C M L
2015-01-01
Chemical tagging of stellar debris from disrupted open clusters and associations underpins the science cases for next-generation multi-object spectroscopic surveys. As part of the Galactic Archaeology project TraCD (Tracking Cluster Debris), a preliminary attempt at reconstructing the birth clouds of now phase-mixed thin disk debris is undertaken using a parametric minimum spanning tree (MST) approach. Empirically-motivated chemical abundance pattern uncertainties (for a 10-dimensional chemistry-space) are applied to NBODY6-realised stellar associations dissolved into a background sea of field stars, all evolving in a Milky Way potential. We demonstrate that significant population reconstruction degeneracies appear when the abundance uncertainties approach 0.1 dex and the parameterised MST approach is employed; more sophisticated methodologies will be required to ameliorate these degeneracies.
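The parametric MST approach above links stars that are close in chemistry-space. As a hedged illustration of the underlying primitive only (not the TraCD pipeline or its parameterisation), Kruskal's algorithm with a union-find structure computes a minimum spanning tree of a weighted graph:

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree of an n-vertex graph.

    edges is a list of (weight, u, v) tuples; returns (total_weight, chosen),
    where chosen lists the (u, v) pairs of the selected edges.
    """
    parent = list(range(n))

    def find(x):
        # Union-find lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):   # scan edges in order of increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                # keep the edge only if it joins two components
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen
```

In a chemical-tagging setting the vertices would be stars and the edge weights their abundance-space distances; cutting the longest MST edges then yields candidate cluster reconstructions.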
Attosecond pulse shaping around a Cooper minimum
Schoun, S B; Wheeler, J; Roedig, C; Agostini, P; DiMauro, L F; Schafer, K J; Gaarde, M B
2013-01-01
High harmonic generation (HHG) is used to measure the spectral phase of the recombination dipole matrix element (RDM) in argon over a broad frequency range that includes the 3p Cooper minimum (CM). The measured RDM phase agrees well with predictions based on the scattering phases and amplitudes of the interfering s- and d-channel contributions to the complementary photoionization process. The reconstructed attosecond bursts that underlie the HHG process show that the derivative of the RDM spectral phase, the group delay, does not have a straightforward interpretation as an emission time, in contrast to the usual attochirp group delay. Instead, the rapid RDM phase variation caused by the CM reshapes the attosecond bursts.
Aligning Sequences by Minimum Description Length
John S. Conery
2008-01-01
This paper presents a new information theoretic framework for aligning sequences in bioinformatics. A transmitter compresses a set of sequences by constructing a regular expression that describes the regions of similarity in the sequences. To retrieve the original set of sequences, a receiver generates all strings that match the expression. An alignment algorithm uses minimum description length to encode and explore alternative expressions; the expression with the shortest encoding provides the best overall alignment. When two substrings contain letters that are similar according to a substitution matrix, a code length function based on conditional probabilities defined by the matrix will encode the substrings with fewer bits. In one experiment, alignments produced with this new method were found to be comparable to alignments from CLUSTALW. A second experiment measured the accuracy of the new method on pairwise alignments of sequences from the BAliBASE alignment benchmark.
Radiation belt dynamics during solar minimum
Gussenhoven, M.S.; Mullen, E.G. (Geophysics Lab., Air Force Systems Command, Hanscom AFB, MA (US)); Holeman, E. (Physics Dept., Boston College, Chestnut Hill, MA (US))
1989-12-01
Two types of temporal variation in the radiation belts are studied using low altitude data taken onboard the DMSP F7 satellite: those associated with the solar cycle and those associated with large magnetic storm effects. Over a three-year period from 1984 to 1987 and encompassing solar minimum, the protons in the heart of the inner belt increased at a rate of approximately 6% per year. Over the same period, outer zone electron enhancements declined both in number and peak intensity. During the large magnetic storm of February 1986, following the period of peak ring current intensity, a second proton belt with energies up to 50 MeV was found at magnetic latitudes between 45° and 55°. The belt lasted for more than 100 days. The slot region between the inner and outer electron belts collapsed by the merging of the two populations and did not reform for 40 days.
R. van Mastrigt (Ron)
1990-01-01
The contractility of the urinary bladder can be adequately described in terms of the parameters P0 (isometric pressure) and Vmax (maximum contraction velocity). In about 12% of urodynamic evaluations of patients these clinically relevant parameters can be calculated from pressure and flow
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
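The penalized maximum likelihood problem described in this record has a standard convex form; as a sketch (the symbols S and ρ are notation assumed here, not quoted from the paper):

```latex
% l1-penalized Gaussian maximum likelihood (notation assumed):
\hat{X} \;=\; \arg\max_{X \succ 0}\; \log\det X \;-\; \operatorname{tr}(SX) \;-\; \rho\,\|X\|_1 ,
% S: sample covariance, rho > 0: penalty weight,
% ||X||_1: elementwise l1 norm of the inverse-covariance estimate X.
% Zero entries of the estimate correspond to absent edges in the
% undirected graphical model, which is how the l1 term induces sparsity.
```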
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, to maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We study the off-line algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
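The single-constraint derivation claimed in this abstract can be sketched in a few lines (a standard maximum entropy calculation, with notation assumed rather than quoted from the paper): fixing only the mean logarithm of the observable already forces a power law.

```latex
% Maximum entropy with a single logarithmic constraint (standard sketch):
\max_{p}\; -\sum_x p(x)\,\ln p(x)
\quad \text{s.t.} \quad \sum_x p(x) = 1, \qquad \sum_x p(x)\,\ln x = \chi .
% Stationarity of the Lagrangian (multipliers \mu and \alpha) gives
p(x) \;\propto\; e^{-\alpha \ln x} \;=\; x^{-\alpha},
% i.e., a pure power law; Zipf's law corresponds to exponent \alpha \approx 1.
```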
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
Do citizens have minimum medical knowledge? A survey
Steurer-Stey Claudia
2007-05-01
Abstract Background Experts defined a "minimum medical knowledge" (MMK) that people need for understanding typical signs and/or risk factors of four relevant clinical conditions: myocardial infarction, stroke, chronic obstructive pulmonary disease and HIV/AIDS. We tested to what degree Swiss adult citizens satisfy this criterion for MMK and whether people with medical experience have acquired better knowledge than those without. Methods Questionnaire interview in a Swiss urban area with 185 Swiss citizens (median age 29 years, interquartile range 23 to 49, 52% male). We obtained context information on age, gender, highest educational level, (para)medical background and specific health experience with one of the conditions in the social surrounding. We calculated the proportion of MMK and examined whether citizens with medical background (personal or professional) would perform better compared to other groups. Results No single citizen reached the full MMK (100%). The mean MMK was as low as 32% and the range was 0-72%. Surprisingly, multivariable analysis showed that participants with a university degree (n = 84; +3.7% MMK, 95% CI 0.4-7.1, p = 0.03), (para)medical background (n = 34; +6.2% MMK, 95% CI 2.0-10.4, p = 0.004) and personal illness experience (n = 96; +4.9% MMK, 95% CI 1.5-8.2, p = 0.004) had only a moderately higher MMK than those without, while age and sex had no effect on the level of MMK. Interaction between university degree and clinical experience (personal or professional) showed no effect, suggesting that higher education lacks a synergistic effect. Conclusion This sample of Swiss citizens did not know more than a third of the MMK. We found little difference within groups with medical experience (personal or professional), suggesting that there is a consistent and dramatic lack of knowledge in the general public about the typical signs and risk factors of relevant clinical conditions.
30 CFR 75.1431 - Minimum rope strength.
2010-07-01
... MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Hoisting and Mantrips Wire Ropes § 75.1431 Minimum rope... used for hoisting shall meet the minimum rope strength values obtained by the following formulas in...) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For...
30 CFR 77.1431 - Minimum rope strength.
2010-07-01
... Hoisting Wire Ropes § 77.1431 Minimum rope strength. At installation, the nominal strength (manufacturer's published catalog strength) of wire ropes used for hoisting shall meet the minimum rope strength values...=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load...
30 CFR 57.19021 - Minimum rope strength.
2010-07-01
... Hoisting Wire Ropes § 57.19021 Minimum rope strength. At installation, the nominal strength (manufacturer's published catalog strength) of wire ropes used for hoisting shall meet the minimum rope strength values...=Static Load×4.0. (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static...
Minimum Wages and Skill Acquisition: Another Look at Schooling Effects.
Neumark, David; Wascher, William
2003-01-01
Examines the effects of minimum wage on schooling, seeking to reconcile some of the contradictory results in recent research using Current Population Survey data from the late 1970s through the 1980s. Findings point to negative effects of minimum wages on school enrollment, bolstering the findings of negative effects of minimum wages on enrollment…
Employment Effects of Minimum and Subminimum Wages. Recent Evidence.
Neumark, David
Using a specially constructed panel data set on state minimum wage laws and labor market conditions, Neumark and Wascher (1992) presented evidence that countered the claim that minimum wages could be raised with no cost to employment. They concluded that estimates indicating that minimum wages reduced employment on the order of 1-2 percent for a…
29 CFR 783.43 - Computation of seaman's minimum wage.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Computation of seaman's minimum wage. 783.43 Section 783.43...'s minimum wage. Section 6(b) requires, under paragraph (2) of the subsection, that an employee...'s minimum wage requirements by reason of the 1961 Amendments (see §§ 783.23 and 783.26). Although...
41 CFR 50-201.1101 - Minimum wages.
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Minimum wages. 50-201... Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 201-GENERAL REGULATIONS § 50-201.1101 Minimum wages. Determinations of prevailing minimum wages or changes therein will be published in the Federal Register by the...
29 CFR 4.159 - General minimum wage.
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true General minimum wage. 4.159 Section 4.159 Labor Office of... General minimum wage. The Act, in section 2(b)(1), provides generally that no contractor or subcontractor... a contract less than the minimum wage specified under section 6(a)(1) of the Fair Labor Standards...
Minimum Wage Effects on Educational Enrollments in New Zealand
Pacheco, Gail A.; Cruickshank, Amy A.
2007-01-01
This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…
14 CFR 23.1513 - Minimum control speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Minimum control speed. 23.1513 Section 23.1513 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... Information § 23.1513 Minimum control speed. The minimum control speed V MC, determined under § 23.149,...
14 CFR 25.1513 - Minimum control speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Minimum control speed. 25.1513 Section 25.1513 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... Limitations § 25.1513 Minimum control speed. The minimum control speed V MC determined under § 25.149 must...
14 CFR 29.49 - Performance at minimum operating speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Performance at minimum operating speed. 29... minimum operating speed. (a) For each Category A helicopter, the hovering performance must be determined... than helicopters, the steady rate of climb at the minimum operating speed must be determined over...
14 CFR 27.49 - Performance at minimum operating speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Performance at minimum operating speed. 27... minimum operating speed. (a) For helicopters— (1) The hovering ceiling must be determined over the ranges... climb at the minimum operating speed must be determined over the ranges of weight, altitude,...
12 CFR 931.3 - Minimum investment in capital stock.
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Minimum investment in capital stock. 931.3... CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL STOCK § 931.3 Minimum investment in capital stock. (a) A Bank shall require each member to maintain a minimum investment in the capital stock of the Bank,...
24 CFR 891.145 - Owner deposit (Minimum Capital Investment).
2010-04-01
... General Program Requirements § 891.145 Owner deposit (Minimum Capital Investment). As a Minimum Capital... Investment shall be one-half of one percent (0.5%) of the HUD-approved capital advance, not to exceed $25,000. ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Owner deposit (Minimum...
12 CFR 263.82 - Establishment of minimum capital levels.
2010-01-01
... Maintain Adequate Capital § 263.82 Establishment of minimum capital levels. The Board has established minimum capital levels for state member banks and bank holding companies in its Capital Adequacy... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Establishment of minimum capital levels....
The generalized minimum spanning tree polytope and related polytopes
Pop, P.C.
2001-01-01
The Generalized Minimum Spanning Tree problem denoted by GMST is a variant of the classical Minimum Spanning Tree problem in which nodes are partitioned into clusters and the problem calls for a minimum cost tree spanning at least one node from each cluster. A different version of the problem, calle
29 CFR 505.3 - Prevailing minimum compensation.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Prevailing minimum compensation. 505.3 Section 505.3 Labor... HUMANITIES § 505.3 Prevailing minimum compensation. (a)(1) In the absence of an alternative determination...)(2) of this section, the prevailing minimum compensation required to be paid under the Act to...
7 CFR 953.43 - Minimum standards of quality.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of...
27 CFR 40.256 - Minimum manufacturing and activity requirements.
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 2 2010-04-01 2010-04-01 false Minimum manufacturing and... Provisions Relating to Operations § 40.256 Minimum manufacturing and activity requirements. The minimum manufacturing and activity requirement prescribed in § 40.61(b) of this part is a continuing condition of...
A Novel Gene Selection Method Based on Sparse Representation and Max-Relevance and Min-Redundancy.
Chen, Min; He, Xiaoming; Duan, ShaoBin; Deng, YingWei
2017-01-01
Gene selection is an important data preprocessing step that has received sustained attention. The maximum relevance and minimum redundancy (MRMR) criterion has been commonly used for gene selection and performs well in evaluating the correlation between two genes. However, because it views genes in isolation, it ignores the influence of other genes. In this study, we propose a new method based on sparse representation and the MRMR algorithm (SRCMRM), using the sparse representation coefficient to represent the relevance of genes and the correlation between genes and categories. The SRCMRM algorithm contains two steps. First, genes irrelevant to the classification target are removed using the sparse representation coefficient. Second, the sparse representation coefficient is used to calculate the correlation between genes and to select the most representative gene with the highest evaluation. To validate the performance of SRCMRM, our method is compared with various algorithms. The proposed method achieves better classification accuracy for all datasets. The effectiveness and stability of our method have been proven through various experiments, which indicates that our method has practical significance.
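The maximum-relevance/minimum-redundancy criterion underlying both this record and the TMRMR work can be sketched as a greedy search. The scoring below uses the classic MRMR "difference" form (relevance minus mean redundancy with already-selected genes); the function and variable names are illustrative, not taken from either paper.

```python
def mrmr_select(relevance, redundancy, k):
    """Greedy MRMR: repeatedly pick the feature index maximizing
    relevance[i] - mean(redundancy[i][j] for already-selected j)."""
    n = len(relevance)
    # Start from the single most relevant feature.
    selected = [max(range(n), key=lambda i: relevance[i])]
    while len(selected) < k:
        remaining = [i for i in range(n) if i not in selected]
        best = max(
            remaining,
            key=lambda i: relevance[i]
            - sum(redundancy[i][j] for j in selected) / len(selected),
        )
        selected.append(best)
    return selected

# Toy example: feature 1 is nearly as relevant as feature 0 but highly
# redundant with it, so the less redundant feature 2 is chosen second.
relevance = [0.90, 0.85, 0.50]
redundancy = [
    [1.0, 0.9, 0.1],
    [0.9, 1.0, 0.1],
    [0.1, 0.1, 1.0],
]
print(mrmr_select(relevance, redundancy, 2))  # → [0, 2]
```

In the temporal setting of the head paper, `relevance` would hold per-gene F-statistics averaged over time steps and `redundancy` would hold dynamic-time-warping similarities; here both are just hand-filled lists.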
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
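The definition $EE(G)=\sum_i e^{\lambda_i}$ can be evaluated without an eigensolver, since $\sum_i e^{\lambda_i} = \operatorname{tr}(e^A) = \sum_k \operatorname{tr}(A^k)/k!$. The pure-Python sketch below uses that power-series identity (illustrative code, not from the paper).

```python
def estrada_index(adj, terms=80):
    """Estrada index EE(G) = trace(exp(A)) = sum_k trace(A^k)/k!,
    computed from the adjacency matrix via the matrix power series."""
    n = len(adj)
    # power holds A^k, starting from A^0 = I.
    power = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    total = float(n)  # k = 0 term: trace(I)/0! = n
    factorial = 1.0
    for k in range(1, terms):
        # power <- power @ adj  (plain triple-loop matrix product)
        power = [
            [sum(power[i][m] * adj[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)
        ]
        factorial *= k
        total += sum(power[i][i] for i in range(n)) / factorial
    return total

# Triangle K3: eigenvalues 2, -1, -1, so EE = e^2 + 2/e ≈ 8.1248.
k3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(round(estrada_index(k3), 4))  # → 8.1248
```

Since adjacency eigenvalues of a small graph are bounded, the series converges rapidly and 80 terms are far more than needed here.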
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Lexicography and the Relevance Criterion
This will be done within the framework of the function theory .... work and made accessible through a rhyming system for the characters as well as a complex .... lexicographically relevant user characteristics — an open list to which new charac-.
Relevance theory: pragmatics and cognition.
Wearing, Catherine J
2015-01-01
Relevance Theory is a cognitively oriented theory of pragmatics, i.e., a theory of language use. It builds on the seminal work of H.P. Grice(1) to develop a pragmatic theory which is at once philosophically sensitive and empirically plausible (in both psychological and evolutionary terms). This entry reviews the central commitments and chief contributions of Relevance Theory, including its Gricean commitment to the centrality of intention-reading and inference in communication; the cognitively grounded notion of relevance which provides the mechanism for explaining pragmatic interpretation as an intention-driven, inferential process; and several key applications of the theory (lexical pragmatics, metaphor and irony, procedural meaning). Relevance Theory is an important contribution to our understanding of the pragmatics of communication.
Relevance theory and pragmatic impairment.
Leinonen, E; Kerbel, D
1999-01-01
This paper summarizes aspects of relevance theory that are useful for exploring impairment of pragmatic comprehension in children. It explores data from three children with pragmatic language difficulties within this framework. Relevance theory is seen to provide a means of explaining why, in a given context, a particular utterance is problematic. It thus enables one to move on from mere description of problematic behaviours towards their explanation. The theory provides a clearer delineation between the explicit and the implicit, and hence between semantics and pragmatics. This enables one to place certain difficulties more firmly within semantics and others within pragmatics. Relevance, and its maximization in communication, are squarely placed within human cognition, which suggests a close connection between pragmatic and cognitive (dis)functioning. Relevance theory thus emerges as a powerful tool in the exploration and understanding of pragmatic language difficulties in children and offers therapeutically valuable insight into the nature of interactions involving individuals with such impairments.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.