A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M such that each of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph equals the cardinality of a maximum resonant set, confirming a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show that steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model's parameter space. We then provide a way of rectifying this space that relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameters' posterior, avoids both under- and overfitting along all directions of parameter space. Through the learning of pairwise Ising models from recordings of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
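Since the abstract's moment-matching logic is compact, here is a minimal sketch of maximum-likelihood fitting for a small pairwise Ising model by plain gradient ascent (not Ferrari's rectified algorithm): for tiny systems all 2^n states can be enumerated, so the gradient, data moments minus model moments, is exact and no Gibbs sampling is needed. All names and constants are illustrative.

```python
import itertools
import numpy as np

def fit_ising(data, lr=0.05, steps=3000):
    """data: (T, n) array of +/-1 spins; returns fields h and couplings J (i<j)."""
    T, n = data.shape
    iu = np.triu_indices(n, k=1)
    m_data = data.mean(axis=0)                        # <s_i> under the data
    C_data = (data.T @ data / T)[iu]                  # <s_i s_j> under the data
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    pair = (states[:, :, None] * states[:, None, :])[:, iu[0], iu[1]]
    h, J = np.zeros(n), np.zeros(len(iu[0]))
    for _ in range(steps):
        logw = states @ h + pair @ J                  # unnormalized log-probabilities
        p = np.exp(logw - logw.max())
        p /= p.sum()                                  # exact Boltzmann distribution
        h += lr * (m_data - p @ states)               # log-likelihood gradient =
        J += lr * (C_data - p @ pair)                 #   data moments - model moments
    return h, J

rng = np.random.default_rng(0)
h, J = fit_ising(rng.choice([-1.0, 1.0], size=(500, 5)))
```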
Tutte sets in graphs II: The complexity of finding maximum Tutte sets
Bauer, D.; Broersma, Haitze J.; Kahl, N.; Morgana, A.; Schmeichel, E.; Surowiec, T.
2007-01-01
A well-known formula of Tutte and Berge expresses the size of a maximum matching in a graph $G$ in terms of what is usually called the deficiency. A subset $X$ of $V(G)$ for which this deficiency is attained is called a Tutte set of $G$. While much is known about maximum matchings, less is known
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a buffer of between 346 and 433 bits (depending on the block) is sufficient to hold the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
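As a schematic of the search strategy described above, here is the generic branch-and-bound pattern (illustrative names, not the authors' implementation): nodes stand for partial run-level sequences, `children` extends them, `value` scores a node, and `upper_bound` overestimates the best possible completion so that hopeless branches are pruned.

```python
def branch_and_bound(root, children, value, upper_bound):
    """Maximize value over the search tree rooted at `root`."""
    best = float("-inf")
    stack = [root]
    while stack:
        node = stack.pop()
        best = max(best, value(node))        # update the incumbent
        for child in children(node):         # branch
            if upper_bound(child) > best:    # prune: cannot beat the incumbent
                stack.append(child)
    return best
```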
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we find that there exists an optimum size at which the model attains maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive one. We also study the size dependence of the efficiency at maximum power. Interestingly, we find that the efficiency at maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size increases further. We explain this behavior of the efficiency at maximum power using a linear response theory for a heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at maximum power under an extreme condition may in principle reach the Carnot efficiency.
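For reference, the bounds from the low-dissipation Carnot model cited in the abstract are worth writing out; the "universal upper bound" referred to is presumably the right-hand member (this is the standard result of Esposito et al., stated here for context, not a quotation from the paper):

```latex
\[
  \frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2-\eta_C},
  \qquad \eta_C = 1 - \frac{T_c}{T_h},
\]
```

where $\eta^{*}$ is the efficiency at maximum power and $\eta_C$ the Carnot efficiency.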
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
On the maximum grain size entrained by photoevaporative winds
Hutchison, Mark A; Maddison, Sarah T
2016-01-01
We model the behaviour of dust grains entrained by photoevaporation-driven winds from protoplanetary discs assuming a non-rotating, plane-parallel disc. We obtain an analytic expression for the maximum entrainable grain size in extreme-UV radiation-driven winds, which we demonstrate to be proportional to the mass loss rate of the disc. When compared with our hydrodynamic simulations, the model reproduces almost all of the wind properties for the gas and dust. In typical turbulent discs, the entrained grain sizes in the wind are smaller than the theoretical maximum everywhere but the inner disc due to dust settling.
A relationship between maximum packing of particles and particle size
Fedors, R. F.
1979-01-01
Experimental data indicate that the volume fraction of particles in a packed bed (i.e. maximum packing) depends on particle size. One explanation for this is based on the idea that particle adhesion is the primary factor. In this paper, however, it is shown that entrainment and immobilization of liquid by the particles can also account for the facts.
Samurai sword sets spindle size.
Reber, Simone; Hyman, Anthony A
2011-12-09
Although the parts list is nearly complete for many cellular structures, mechanisms that control their size remain poorly understood. Loughlin and colleagues now show that phosphorylation of a single residue of katanin, a microtubule-severing protein, largely accounts for the difference in spindle length between two closely related frogs.
Maximum Bipartite Matching Size And Application to Cuckoo Hashing
Kanizo, Yossi; Keslassy, Isaac
2010-01-01
Cuckoo hashing with a stash is a robust high-performance hashing scheme that can be used in many real-life applications. It complements cuckoo hashing by adding a small stash storing the elements that cannot fit into the main hash table due to collisions. However, the exact required size of the stash and the tradeoff between its size and the memory over-provisioning of the hash table are still unknown. We settle this question by investigating the equivalent maximum matching size of a random bipartite graph, with a constant left-side vertex degree $d=2$. Specifically, we provide an exact expression for the expected maximum matching size and show that its actual size is close to its mean, with high probability. This result relies on decomposing the bipartite graph into connected components, and then separately evaluating the distribution of the matching size in each of these components. In particular, we provide an exact expression for any finite bipartite graph size and also deduce asymptotic results as the nu...
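The reduction in the abstract, elements as left vertices and their d = 2 candidate buckets as right neighbors, is easy to reproduce; below is a hedged sketch using Kuhn's augmenting-path algorithm to compute the maximum matching size (the stash then holds the unmatched elements). Parameters are illustrative.

```python
import random

def max_matching_size(adj, n_right):
    """adj[u] lists the right-neighbors of left vertex u (Kuhn's algorithm)."""
    match_r = [-1] * n_right
    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False
    return sum(try_augment(u, set()) for u in range(len(adj)))

# Each of n elements hashes to d = 2 random buckets, as in cuckoo hashing;
# unmatched elements are the ones that would overflow into the stash.
n, m, d = 1000, 1200, 2
adj = [random.sample(range(m), d) for _ in range(n)]
print(n - max_matching_size(adj, m), "elements would go to the stash")
```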
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
S. Radhika
2016-04-01
Maximum correntropy criterion (MCC)-based adaptive filters are robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with a variable step size, to improve both convergence rate and steady-state error while retaining robustness against impulsive interference. The optimal variable step size is obtained by minimizing the mean square deviation (MSD) from one iteration to the next. Simulation results for a highly impulsive system identification scenario show that the proposed algorithm converges faster and attains a lower steady-state error than conventional MCC-based adaptive filters.
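A minimal sketch of the idea, hedged: the MCC update weights each error by a Gaussian kernel, which suppresses impulsive samples, and the step size is varied online. The step-size rule below (smoothed, kernel-weighted error power) is an illustrative stand-in for the paper's MSD-minimizing rule; all constants are assumptions.

```python
import numpy as np

def vss_mcc_lms(x, d, taps=8, sigma=2.0, mu_max=0.1, alpha=0.97, c=1.0):
    w = np.zeros(taps)
    p = 0.0                                   # robust running estimate of error power
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]               # regressor vector
        e = d[n] - w @ u
        k = np.exp(-e**2 / (2 * sigma**2))    # correntropy kernel: ~0 for impulses
        p = alpha * p + (1 - alpha) * k * e**2
        mu = mu_max * p / (p + c)             # large while converging, small at steady state
        w += mu * k * e * u                   # kernel-weighted (MCC) update
    return w
```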
Network Decomposition and Maximum Independent Set Part I: Theoretic Basis
朱松年; 朱嫱
2003-01-01
The structure and characteristics of a connected network are analyzed, and a special kind of sub-network that can optimize the iteration process is identified. Sufficient and necessary conditions for obtaining the maximum independent set are then deduced. It is found that the neighborhood of this sub-network possesses similar characteristics, but the two can never be merged together. In particular, it is shown that the network can be divided into two parts in a certain way, and both can then be transformed into a pair-sets network, in which the special sub-networks and their neighborhoods appear alternately throughout. Using this property, a decomposition of the network that loses no solutions is obtained. These results lay the groundwork for developing a much better algorithm with a polynomial time bound for an odd network in the application research part of this subject.
A Maximum Entropy Modelling of the Rain Drop Size Distribution
Francisco J. Tapiador
2011-01-01
This paper presents a maximum entropy approach to Rain Drop Size Distribution (RDSD) modelling. It is shown that this approach allows (1) the use of a physically consistent rationale to select a particular probability density function (pdf), (2) an alternative method of parameter estimation based on expectations of the population instead of sample moments, and (3) a progressive method of modelling that updates the pdf as new empirical information becomes available. The method is illustrated with both synthetic and real RDSD data, the latter coming from a laser disdrometer network specifically designed to measure the spatial variability of the RDSD.
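To make point (1) concrete, here is a minimal numerical illustration of the maximum entropy recipe with a single mean-diameter constraint, where the analytic answer is the exponential pdf f(D) = (1/mu) exp(-D/mu); richer RDSD shapes follow by adding further moment constraints. The mean value is an assumed placeholder.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

mu = 1.2  # prescribed mean drop diameter (mm), assumed for illustration

def mean_of(lam):
    # mean of the maxent density f(D) ~ exp(-lam*D) on [0, inf)
    Z = quad(lambda D: np.exp(-lam * D), 0, np.inf)[0]
    return quad(lambda D: D * np.exp(-lam * D), 0, np.inf)[0] / Z

lam = brentq(lambda l: mean_of(l) - mu, 1e-3, 1e3)  # solve the mean constraint
print(lam, 1.0 / mu)  # numerically recovers the analytic multiplier 1/mu
```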
BINDER DRAINAGE TEST FOR POROUS MIXTURES MADE BY VARYING THE MAXIMUM AGGREGATE SIZES
Hardiman Hardiman
2004-01-01
Binder drainage occurs in mixes with a small aggregate surface area, particularly porous asphalt. The binder drainage test, developed by the Transport Research Laboratory, UK, is commonly used to set an upper limit on the acceptable binder content for a porous mix. This paper presents the results of a laboratory investigation into the effects of different binder types on the binder drainage characteristics of porous mixes made with various maximum aggregate sizes (20, 14, and 10 mm). Two types of binder were used: conventional 60/70 pen bitumen and styrene-butadiene-styrene (SBS) modified bitumen. The amount of binder lost through drainage after three hours at the maximum mixing temperature was measured in duplicate for mixes of different maximum sizes and binder contents; the maximum mixing temperature adopted depends on the binder used. The retained binder is plotted against the initial mixed binder content, together with the line of equality where the retained binder equals the mixed binder content. The results indicate that using SBS modified bitumen significantly increases the target binder content. The significance is discussed in terms of the target binder content, the critical binder content, the maximum mixed binder content, and the maximum retained binder content values obtained from the binder drainage test. It was concluded that increasing the maximum aggregate size decreases the maximum retained binder content, critical binder content, target binder content, and maximum mixed binder content for both binders, and that for all mixtures SBS gives the highest values.
Understanding the Role of Reservoir Size on Probable Maximum Precipitation
Woldemichael, A. T.; Hossain, F.
2011-12-01
This study addresses the question: does the surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin? The motivation was the notion that the stationarity assumption implicit in PMP-based dam design can be undermined in the post-dam era by an enhancement of extreme precipitation patterns due to the artificial reservoir. In addition, the study lays the foundation for using regional atmospheric models to perform life-cycle assessment for planned or existing dams and to formulate best management practices. The American River Watershed (ARW), with the Folsom dam at the confluence of the American River, was selected as the study region, and the Dec-Jan 1996-97 storm event as the study period. The numerical atmospheric model used was the Regional Atmospheric Modeling System (RAMS). First, RAMS was calibrated and validated with selected station and spatially interpolated precipitation data, and the best combinations of parameterization schemes in RAMS were selected accordingly. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity in the model was raised to 100% from the ground up to the 500 mb level. The resulting model-based maximum 72-hr precipitation values were termed extreme precipitation (EP) to distinguish them from PMPs obtained by standard methods. Third, six hypothetical reservoir size scenarios, ranging from no dam (all dry) to a reservoir submerging half the basin, were established to test the influence of reservoir size on EP. For the case of the ARW, our study clearly demonstrated that the stationarity assumption implicit in the traditional estimation of PMP can be rendered invalid in large part by the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give indication of the
Buffer Size Setting Method for DBR Scheduling
Park, Soonyoung; Woo, Kiyun; Fujimura, Shigeru
There are many kinds of delay in real-world production systems, caused by many factors including unexpected accidents. A delayed order may inflict great damage not only on itself but also on other affected orders. To prevent such losses from frequent delays, the DBR (Drum-Buffer-Rope) scheduling method of TOC (Theory of Constraints) manages the production schedule by observing the state of time buffers. The current buffer size setting method for DBR scheduling is very simple and depends on the user's experience. Although it makes it possible to meet due dates for production orders, it leads to redundant production lead time and stock. For DBR scheduling, it is not clear how the buffer size should be set. Therefore, this paper proposes a buffer size setting method for DBR scheduling, providing a numerical model for the buffer size. In addition, a simulation compares the current method with the proposed method and shows the effect of the proposed method.
Method to Determine Maximum Allowable Sinterable Silver Interconnect Size
Wereszczak, A. A.; Modugno, M. C.; Waters, S. B.; DeVoto, D. J.; Paret, P. P.
2016-05-01
The use of sintered silver for large-area interconnection is attractive for some large-area bonding applications in power electronics, such as the bonding of metal-clad, electrically insulating substrates to heat sinks. Arrays of different pad sizes and shapes have been considered for such large-area bonding; however, rather than choosing their size arbitrarily, it is desirable to use the largest size at which the onset of interconnect delamination does not occur. If that is achieved, sintered silver's high thermal and electrical conductivities can be exploited fully. Toward this end, a simple and inexpensive proof test is described to identify the largest achievable interconnect size with sinterable silver. The method's objective is to purposely initiate failure or delamination. Copper and invar (a ferrous-nickel alloy whose coefficient of thermal expansion (CTE) is similar to that of silicon or silicon carbide) disks were used in this study, bonded with sinterable silver. During execution of the method, delamination occurred in some samples during cooling from the 250 degrees C sintering and bonding temperature to room temperature, and in others from thermal cycling. These occurrences and their interpretations highlight the method's utility, and the results described herein are used to speculate how sintered-silver bonding will work with other material combinations.
Fatigue Strength Prediction of Drilling Materials Based on the Maximum Non-metallic Inclusion Size
Zeng, Dezhi; Tian, Gang; Liu, Fei; Shi, Taihe; Zhang, Zhi; Hu, Junying; Liu, Wanying; Ouyang, Zhiying
2015-12-01
In this paper, statistics of the size distribution of non-metallic inclusions in five drilling materials were compiled, and the fatigue strength of each drilling material was predicted based on the maximum non-metallic inclusion size. The sizes of non-metallic inclusions in the drilling materials were observed to follow the expected inclusion size distribution, from which the maximum inclusion size in the fatigue specimens was deduced. Fatigue strength was then obtained from the prediction equation proposed by Murakami, which relates maximum inclusion size to fatigue strength; fatigue strength was also measured in rotating bending tests. The predicted fatigue strength was significantly lower than the measured one, so the coefficients in the prediction equation were revised according to the comparison. The revised equation gives satisfactory predictions of the fatigue strength of drilling materials at a fatigue life of 10^7 cycles and can be used for fast prediction of the fatigue strength of drilling materials.
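For context (standard background, not quoted from the paper): Murakami's $\sqrt{\mathrm{area}}$ model, which the abstract builds on, is commonly written for internal inclusions as

```latex
\[
  \sigma_w = \frac{1.56\,(HV + 120)}{\left(\sqrt{\mathrm{area}}_{\max}\right)^{1/6}},
\]
```

with $\sigma_w$ the fatigue limit in MPa, $HV$ the Vickers hardness in kgf/mm², and $\sqrt{\mathrm{area}}_{\max}$ the square root of the maximum inclusion's projected area in µm (the prefactor is 1.43 for surface inclusions); the paper's revision adjusts such coefficients against its rotating-bending data.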
The maximum sizes of large scale structures in alternative theories of gravity
Bhattacharya, Sourav; Romano, Antonio Enea; Skordis, Constantinos; Tomaras, Theodore N
2016-01-01
The maximum size of a cosmic structure is given by the maximum turnaround radius -- the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulas for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulas agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the $\Lambda$CDM value, by a factor $1 + \frac{1}{3\omega}$, where $\omega \gg 1$ is the Brans-Dicke parameter, implying consistency of the theory with current data.
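As a concrete reference point (a standard result stated here for context, with the Brans-Dicke correction applied as the multiplicative factor quoted in the abstract):

```latex
\[
  R_{\mathrm{TA,max}}^{\Lambda\mathrm{CDM}}
    = \left(\frac{3GM}{\Lambda c^{2}}\right)^{1/3},
  \qquad
  R_{\mathrm{TA,max}}^{\mathrm{BD}}
    \simeq \left(1 + \frac{1}{3\omega}\right) R_{\mathrm{TA,max}}^{\Lambda\mathrm{CDM}} .
\]
```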
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N
2014-01-01
Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
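A hedged sketch of how growth parameters can be extracted from exactly this kind of mark-recapture data: Fabens' (1965) form of the von Bertalanffy model predicts the growth increment from length at tagging and time at liberty. The data below are toy placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def fabens(X, Linf, K):
    """Predicted growth increment: (Linf - L1) * (1 - exp(-K * dt))."""
    L1, dt = X
    return (Linf - L1) * (1.0 - np.exp(-K * dt))

L1 = np.array([210., 250., 300., 330., 360.])   # cm TL at tagging (toy values)
dt = np.array([0.5, 1.0, 2.1, 1.4, 3.0])        # years at liberty (toy values)
L2 = np.array([265., 300., 352., 360., 392.])   # cm TL at recapture (toy values)

(Linf, K), _ = curve_fit(fabens, (L1, dt), L2 - L1, p0=(420.0, 0.2))
print(f"Linf = {Linf:.0f} cm TL, K = {K:.2f} per yr")
```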
Concept learning set-size functions for Clark's nutcrackers.
Wright, Anthony A; Magnotti, John F; Katz, Jeffrey S; Leonard, Kevin; Kelly, Debbie M
2016-01-01
Same/Different abstract-concept learning by Clark's nutcrackers (Nucifraga columbiana) was tested with novel stimuli following training-set expansion (8, 16, 32, 64, 128, 256, 512, and 1024 picture items). The resulting set-size function was compared to those of rhesus monkeys (Macaca mulatta), capuchin monkeys (Cebus apella), and pigeons (Columba livia). Nutcrackers showed partial concept learning following initial eight-item set learning, unlike the other species (Magnotti, Katz, Wright, & Kelly, 2015). The mean function for the nutcrackers' novel-stimulus transfer increased linearly with the logarithm of training set size and intersected the baseline function at the 128-item set size. Thus, nutcrackers on average achieved full concept learning (i.e., transfer statistically equivalent to baseline performance) somewhere between set sizes of 64 and 128 items, similar to full concept learning by monkeys. Pigeons required a somewhat larger training set (256 items) for full concept learning, but results from other experiments (initial training and transfer with 32- and 64-item set sizes) suggested that carryover effects with smaller set sizes may have artificially prolonged the pigeons' full concept learning. We find it remarkable that these diverse species with very different neural architectures can fully learn this same/different abstract concept, and (at least under some conditions) do so with roughly similar set sizes (64-128 items) and numbers of training exemplars, despite initial concept-learning advantages (nutcrackers), learning disadvantages (pigeons), or increasing baselines (monkeys).
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under a model of multiple independent reader sessions with detection errors due to unreliable radio links. The performance of the proposed approach is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol.
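A deliberately simplified sketch of the ML idea (an assumed model, not the paper's: each of K independent reader sessions detects each of N tags with a known probability p, missed detections only, so per-session counts are Binomial(N, p)); the joint likelihood is then maximized over N by a scan.

```python
import numpy as np
from scipy.stats import binom

def ml_cardinality(counts, p, n_max=2000):
    """counts: distinct tags seen per reader session; p: per-tag detection prob."""
    counts = np.asarray(counts)
    Ns = np.arange(counts.max(), n_max + 1)          # candidate cardinalities
    loglik = [binom.logpmf(counts, N, p).sum() for N in Ns]
    return Ns[int(np.argmax(loglik))]

print(ml_cardinality([83, 91, 88], p=0.9))           # ML estimate of N
```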
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data.
Takahashi, Jun; Takabe, Satoshi; Hukushima, Koji
2017-07-01
A recently proposed exact algorithm for the maximum independent set problem is analyzed. The typical running time is improved exponentially in some parameter regions compared to simple binary search. Furthermore, the algorithm overcomes the core transition point, where the conventional leaf removal algorithm fails, and works up to the replica symmetry breaking (RSB) transition point. This suggests that a leaf removal core itself is not enough for typical hardness in the random maximum independent set problem, providing further evidence for RSB being the obstacle for algorithms in general.
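For reference, a sketch of the leaf-removal procedure the abstract refers to: repeatedly move a vertex of degree at most one into the independent set and delete its neighbor; whatever remains is the core on which leaf removal alone fails (the paper's exact algorithm is what handles that regime). The graph parameters are illustrative.

```python
import networkx as nx

def leaf_removal(G):
    G = G.copy()
    indep = set()
    while True:
        leaf = next((v for v in G if G.degree(v) <= 1), None)
        if leaf is None:
            break                                   # only the core remains
        indep.add(leaf)
        G.remove_nodes_from(list(G.neighbors(leaf)) + [leaf])
    return indep, G                                 # independent vertices, leftover core

G = nx.gnp_random_graph(200, 2.5 / 200, seed=1)     # sparse random graph
indep, core = leaf_removal(G)
print(len(indep), "in independent set;", core.number_of_nodes(), "core vertices")
```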
A fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation.
Li, Haisen S; Romeijn, H Edwin; Dempsey, James F
2006-09-01
We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the dose sampling phase and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution can be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by discrete dose sampling under this limit, the dose grid size cannot exceed a maximum acceptable value. The method is based on Fourier analysis and the Shannon-Nyquist sampling theorem, as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a nearly monoenergetic (width about 1% of the incident energy), monodirectional, infinitesimal (nonintegrated) pencil beam in a water medium. By monodirectional, we mean that the protons travel in the same direction before entering the water medium; the various scattering prior to entry is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite-sized beamlet. Since a finite-sized beamlet is a superposition of infinitesimal pencil beams, the maximum acceptable grid size obtained for the infinitesimal pencil beam also applies to finite-sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth-dependent Gaussian distribution. The model included the spreads of the Bragg peak and of the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
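The sampling condition at the heart of this analysis is the Shannon-Nyquist one, stated here in one dimension for orientation (the paper's contribution is deriving the effective band limit from the Bragg peak and lateral Gaussian widths):

```latex
\[
  \Delta x \;\le\; \frac{1}{2\,\nu_{\max}},
\]
```

where $\Delta x$ is the dose grid spacing and $\nu_{\max}$ is the highest spatial frequency of the pencil-beam dose distribution that must be retained to keep the reconstruction error within the 2% limit.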
Estimating the Size and Timing of the Maximum Amplitude of Solar Cycle 24
Ke-Jun Li; Peng-Xin Gao; Tong-Wei Su
2005-01-01
A simple statistical method is used to estimate the size and timing of the maximum amplitude of the next solar cycle (cycle 24). Presuming cycle 23 to be a short cycle (as is more likely), the minimum of cycle 24 should occur around December 2006 (±2 months) and the maximum around March 2011 (±9 months), with an amplitude of 189.9 ± 15.5 if it is a fast riser, or about 136 if it is a slow riser. If we presume cycle 23 to be a long cycle (as is less likely), the minimum of cycle 24 should occur around June 2008 (±2 months) and the maximum around February 2013 (±8 months), with an amplitude of about 137 or 80, according as the cycle is a fast or a slow riser.
Dependency of U.S. Hurricane Economic Loss on Maximum Wind Speed and Storm Size
Zhai, Alice R
2014-01-01
Many empirical hurricane economic loss models consider only wind speed and neglect storm size. These models may be inadequate for accurately predicting the losses of super-sized storms, such as Hurricane Sandy in 2012. In this study, we examined the dependence of normalized U.S. hurricane loss on both wind speed and storm size for 73 tropical cyclones that made landfall in the U.S. from 1988 to 2012. A multivariate least squares regression is used to construct a hurricane loss model using both wind speed and size as predictors. Using maximum wind speed and size together captures more variance of losses than using wind speed or size alone. It is found that normalized hurricane loss (L) approximately follows a power law relation with maximum wind speed (Vmax) and size (R). Assuming L = 10^c Vmax^a R^b, with c a scaling factor, the coefficients a and b generally range between 4-12 and 2-4, respectively. Both a and b tend to increase with stronger wind speed. For large losses, a weighted regression model, with...
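The fit described above reduces to ordinary least squares after taking logarithms, since L = 10^c Vmax^a R^b gives log10 L = c + a log10 Vmax + b log10 R; here is a minimal sketch with placeholder data (not the study's loss records).

```python
import numpy as np

Vmax = np.array([40., 50., 65., 75., 90.])     # max wind speed (m/s), placeholder
R    = np.array([100., 80., 150., 60., 200.])  # storm size (km), placeholder
L    = np.array([0.5, 1.2, 30., 8., 400.])     # normalized loss, placeholder

# design matrix for log10 L = c + a*log10(Vmax) + b*log10(R)
A = np.column_stack([np.ones_like(Vmax), np.log10(Vmax), np.log10(R)])
(c, a, b), *_ = np.linalg.lstsq(A, np.log10(L), rcond=None)
print(f"c = {c:.2f}, wind exponent a = {a:.1f}, size exponent b = {b:.1f}")
```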
Applying rough sets in word segmentation disambiguation based on maximum entropy model
[No author listed]
2006-01-01
To solve the complicated feature extraction and long-distance dependency problems in Word Segmentation Disambiguation (WSD), this paper proposes applying rough sets to WSD based on the Maximum Entropy model. First, rough set theory is applied to extract complicated features and long-distance features, even from noisy or inconsistent corpora. Second, these features are added to the Maximum Entropy model, so that feature weights can be assigned according to the performance of the whole disambiguation model. Finally, a semantic lexicon is adopted to build class-based rough set features to overcome data sparseness. Experiments indicated that our method performed better than previous models, which had obtained top rank in WSD in the 863 Evaluation in 2003. This system ranked first and second, respectively, in the MSR and PKU open tests at the Second International Chinese Word Segmentation Bakeoff held in 2005.
Adam Hartstone-Rose
2011-01-01
In a recent study, we quantified the scaling of ingested food size (Vb), the maximum size at which an animal consistently ingests food whole, and found that Vb scaled isometrically between species of captive strepsirrhines. The current study examines the relationship between Vb and body size within species, focusing on the frugivorous Varecia rubra and the folivorous Propithecus coquereli. We found no overlap in Vb between the species (all V. rubra ingested larger pieces of food relative to those eaten by P. coquereli), and least-squares regression of Vb on three different measures of body mass showed no scaling relationship within either species. We believe this lack of relationship results from the relatively narrow intraspecific body size variation and the seemingly patternless individual variation in Vb within species, and we take this study as further evidence that general scaling questions are best examined interspecifically rather than intraspecifically.
Setting the renormalization scale in QCD: The principle of maximum conformality
Brodsky, S. J.; Di Giustino, L.
2012-01-01
When the renormalization scale is set properly, all nonconformal $\beta \neq 0$ terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to those of a conformal theory, i.e., the corresponding theory with $\beta = 0$. The resulting scale-fixed predictions using the principle of maximum conformality (PMC) are independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale setting in the Abelian limit...
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
Monteiro, Estêvão Rios; Škarabot, Jakob; Vigotsky, Andrew D; Brown, Amanda Fernandes; Gomes, Thiago Matassoli; Novaes, Jefferson da Silva
2017-02-01
Foam rollers, or other similar devices, are a method for acutely increasing range of motion that, in contrast to static stretching, does not appear to have detrimental effects on neuromuscular performance. The purpose of this study was to investigate the effects of different volumes (60 and 120 seconds) of foam rolling of the hamstrings during the inter-set rest period on repetition performance of the knee extension exercise. Twenty-five recreationally active females were recruited for the study (27.8 ± 3.6 years, 168.4 ± 7.2 cm, 69.1 ± 10.2 kg, 27.2 ± 2.1 m²/kg). Initially, subjects underwent ten-repetition maximum (10 RM) testing and retesting. Thereafter, the experiment involved three sets of knee extensions with the pre-determined 10 RM load to concentric failure, with the goal of completing the maximum number of repetitions. During the inter-set rest period, either passive rest or foam rolling of different durations (60 and 120 seconds) was employed in randomized order. Ninety-five percent confidence intervals revealed dose-dependent, detrimental effects, with more time spent foam rolling resulting in fewer repetitions (Cohen's d of 2.0 and 1.2 for 120 and 60 seconds, respectively, in comparison with passive rest). The results of the present study suggest that more inter-set foam rolling applied to the antagonist muscle group is detrimental to the ability to continually produce force. The finding that inter-set foam rolling of the antagonist muscle group decreases maximum repetition performance has implications for foam rolling prescription and implementation in both rehabilitation and athletic populations. Level of evidence: 2b.
An electromagnetism-like method for the maximum set splitting problem
Kratica Jozef
2013-01-01
In this paper, an electromagnetism-like approach (EM) for solving the maximum set splitting problem (MSSP) is applied. A hybrid approach, consisting of movement based on attraction-repulsion mechanisms combined with the proposed scaling technique, directs the EM to promising search regions. A fast implementation of the local search procedure further improves the efficiency of the overall EM system. The performance of the proposed EM approach is evaluated on two classes of instances from the literature: minimum hitting set and Steiner triple systems. The results show that, except in one case, EM reaches optimal solutions on minimum hitting set instances with up to 500 elements and 50,000 subsets. It also reaches all optimal/best-known solutions for Steiner triple systems.
Active impulsive noise control using maximum correntropy with adaptive kernel size
Lu, Lu; Zhao, Haiquan
2017-03-01
Active noise control (ANC), based on the principle of superposition, is an attractive method for attenuating noise signals. However, impulsive noise in ANC systems degrades the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, in order to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed that takes into account past error estimates over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
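A schematic of the adaptive-kernel-size idea, hedged: the kernel width is re-estimated from a sliding window of recent errors (here via a MAD-based robust scale, an assumption of this sketch rather than the authors' exact recursion), and the secondary-path filtering of a full ANC system is omitted.

```python
import numpy as np
from collections import deque

def adaptive_kernel_mcc(x, d, taps=8, mu=0.05, win=50):
    w = np.zeros(taps)
    recent = deque(maxlen=win)                  # sliding window of past errors
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]
        e = d[n] - w @ u
        recent.append(e)
        err = np.asarray(recent)
        sigma = 1.4826 * np.median(np.abs(err - np.median(err))) + 1e-6  # robust scale
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u  # MCC update, adaptive kernel
    return w
```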
Effects of loading and size on maximum power output and gait characteristics in geckos.
Irschick, Duncan J; Vanhooydonck, Bieke; Herrel, Anthony; Andronescu, Anemone
2003-11-01
Stride length, stride frequency and power output are all factors influencing locomotor performance. Here, we first test whether mass-specific power output limits climbing performance in two species of geckos (Hemidactylus garnoti and Gekko gecko) by adding external loads to their bodies. We then test whether body size has a negative effect on mass-specific power output. Finally, we test whether loading affects kinematics in both gecko species. Lizards were induced to run vertically on a smooth wooden surface with loads of 0-200% of body mass (BM) in H. garnoti and 0-100% BM in G. gecko. For each stride, we calculated angular and linear kinematics (e.g. trunk angle, stride length), performance (maximum speed) and mean mass-specific power output per stride. The addition of increasingly large loads caused an initial increase in maximum mass-specific power output in both species, but for H. garnoti, mass-specific power output remained constant at higher loads (150% and 200% BM), even though maximum velocity declined. This result, in combination with the fact that stride frequency showed no evidence of leveling off as speed increased in either species, suggests that power limits maximum speed. In addition, the large gecko (G. gecko) produced significantly less power than the smaller H. garnoti, despite the fact that both species ran at similar speeds. This difference disappeared, however, when we recalculated power output based on higher maximum speeds for unloaded G. gecko moving vertically obtained by other researchers. Finally, the addition of external loads did not affect speed modulation in either species: both G. gecko and H. garnoti increase speed primarily by increasing stride frequency, regardless of loading condition. For a given speed, both species take shorter but more strides with heavier loads, but for a given load, G. gecko attains similar speeds to H. garnoti by taking longer but fewer strides.
A Maximum Power Point Tracker with Automatic Step Size Tuning Scheme for Photovoltaic Systems
Kuei-Hsiang Chao
2012-01-01
The purpose of this paper is to study a novel maximum power point tracking (MPPT) method for photovoltaic (PV) systems. First, the simulation environment for PV systems is constructed using the PSIM software package. A 516 W PV system built with Kyocera KC40T photovoltaic modules is used as an example in simulations of the proposed MPPT method. The conventional incremental conductance (INC) MPPT method usually requires a tradeoff between dynamic response and steady-state oscillation, whereas the proposed modified incremental conductance method, based on extension theory, automatically adjusts the step size to track the maximum power point (MPP) of the PV array, effectively improving both the dynamic response and the steady-state performance of the PV system. Simulation and experimental results verify that the proposed extension maximum power point tracking method provides good dynamic response and steady-state performance for a photovoltaic power generation system.
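For orientation, a minimal incremental-conductance step with an automatically scaled step size: at the MPP, dI/dV = -I/V, so the deviation |dI/dV + I/V| can scale the voltage-reference step (the paper's extension-theory rule is more elaborate; the gains here are assumptions).

```python
def inc_mppt_step(V, I, V_prev, I_prev, v_ref, k=0.5, step_max=0.5):
    """One INC iteration: returns the updated voltage reference."""
    dV, dI = V - V_prev, I - I_prev
    if dV == 0:
        err = dI                         # voltage unchanged: follow the current change
    else:
        err = dI / dV + I / V            # zero exactly at the maximum power point
    step = min(step_max, k * abs(err))   # large steps far from the MPP, small near it
    return v_ref + step if err > 0 else v_ref - step
```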
Movahednejad, E.; Ommi, F.; Hosseinalipour, S. M.; Chen, C. P.; Mahdavi, S. A.
2011-12-01
This paper describes the implementation of the instability analysis of wave growth on liquid jet surface, and maximum entropy principle (MEP) for prediction of droplet diameter distribution in primary breakup region. The early stage of the primary breakup, which contains the growth of wave on liquid-gas interface, is deterministic; whereas the droplet formation stage at the end of primary breakup is random and stochastic. The stage of droplet formation after the liquid bulk breakup can be modeled by statistical means based on the maximum entropy principle. The MEP provides a formulation that predicts the atomization process while satisfying constraint equations based on conservations of mass, momentum and energy. The deterministic aspect considers the instability of wave motion on jet surface before the liquid bulk breakup using the linear instability analysis, which provides information of the maximum growth rate and corresponding wavelength of instabilities in breakup zone. The two sub-models are coupled together using momentum source term and mean diameter of droplets. This model is also capable of considering drag force on droplets through gas-liquid interaction. The predicted results compared favorably with the experimentally measured droplet size distributions for hollow-cone sprays.
Set size and culture influence children's attention to number.
Cantrell, Lisa; Kuwabara, Megumi; Smith, Linda B
2015-03-01
Much research evidences a system in adults and young children for approximately representing quantity. Here we provide evidence that the bias to attend to discrete quantity versus other dimensions may be mediated by set size and culture. Preschool-age English-speaking children in the United States and Japanese-speaking children in Japan were tested in a match-to-sample task where number was pitted against cumulative surface area in both large and small numerical set comparisons. Results showed that children from both cultures were biased to attend to the number of items for small sets. Large set responses also showed a general attention to number when ratio difficulty was easy. However, relative to the responses for small sets, attention to number decreased for both groups; moreover, both U.S. and Japanese children showed a significant bias to attend to total amount for difficult numerical ratio distances, although Japanese children shifted attention to total area at relatively smaller set sizes than U.S. children. These results add to our growing understanding of how quantity is represented and how such representation is influenced by context--both cultural and perceptual.
Scaling of wingbeat frequency with body mass in bats and limits to maximum bat size.
Norberg, Ulla M Lindhe; Norberg, R Åke
2012-03-01
The ability to fly opens up ecological opportunities, but flight mechanics and muscle energetics impose constraints, one of which is that the maximum body size must be kept below a rather low limit. The muscle power available for flight increases in proportion to flight muscle mass and wingbeat frequency. The maximum wingbeat frequency attainable among increasingly large animals decreases faster than the minimum frequency required, so eventually they coincide, thereby defining the maximum body mass at which the available power just matches the power required for sustained aerobic flight. Here, we report new wingbeat frequency data for 27 morphologically diverse bat species representing nine families, and additional data from the literature for another 38 species, together spanning a range from 2.0 to 870 g. For these species, wingbeat frequency decreases with increasing body mass as $M_b^{-0.26}$. We filmed 25 of our 27 species in free flight outdoors, and for these the wingbeat frequency varies as $M_b^{-0.30}$. These exponents are strikingly similar to the body mass dependency $M_b^{-0.27}$ among birds, but the wingbeat frequency is higher in birds than in bats for any given body mass. The downstroke muscle mass is also a larger proportion of the body mass in birds. We applied these empirically based scaling functions for wingbeat frequency in bats to biomechanical theories about how the power required for flight and the power available converge as animal size increases. To this end we estimated the muscle mass-specific power required for the largest flying extant bird (12-16 kg) and assumed that the largest potential bat would exert similar muscle mass-specific power. Given the observed scaling of wingbeat frequency and the proportion of the body mass that is made up by flight muscles in birds and bats, we estimated the maximum potential body mass for bats to be 1.1-2.3 kg. The largest bats, extinct or extant, weigh 1.6 kg. This is within the range expected if it
Bai, Xian-Zong; Ma, Chao-Wei; Chen, Lei; Tang, Guo-Jin
2016-09-01
When engaging in maximum collision probability (Pcmax) analysis for short-term conjunctions between two orbiting objects, it is important to clarify and understand the assumptions under which Pcmax is obtained. Based on Chan's analytical formulae and an analysis of the covariance ellipse's variation in orientation, shape, and size in the two-dimensional conjunction plane, this paper proposes a clear and comprehensive analysis of maximum collision probability when considering these variables. Eight situations are considered when calculating Pcmax, according to the varying orientation, shape, and size of the covariance ellipse. Three of the situations are not practical or meaningful; the remaining ones were completely or partially discussed in previous works. These situations are discussed with uniform definitions and symbols and are derived independently in this paper. The consequences are compared with and validated against results from previous works. Finally, a practical conjunction event is presented as a test case to demonstrate the effectiveness of the methodology. Comparison of the Pcmax presented in this paper with empirical results from the curve or surface calculated by numerical methods indicates that the relative error of Pcmax is less than 0.0039%.
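The quantity being maximized can be checked numerically: for a short-term encounter, Pc is the integral of the two-dimensional Gaussian position error over the hard-body circle in the conjunction plane. A hedged sketch (diagonal covariance, illustrative numbers) useful for verifying the analytic cases empirically:

```python
import numpy as np
from scipy.integrate import dblquad

def collision_probability(R, mx, my, sx, sy):
    """Pc over the hard-body circle of radius R; (mx, my) is the miss vector."""
    pdf = lambda y, x: np.exp(-0.5 * (((x - mx) / sx)**2 + ((y - my) / sy)**2)) \
                       / (2 * np.pi * sx * sy)
    val, _ = dblquad(pdf, -R, R,
                     lambda x: -np.sqrt(R**2 - x**2),   # circle lower boundary
                     lambda x: np.sqrt(R**2 - x**2))    # circle upper boundary
    return val

print(collision_probability(R=10.0, mx=50.0, my=0.0, sx=200.0, sy=100.0))
```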
Effects of maximum aggregate size on UPV of brick aggregate concrete.
Mohammed, Tarek Uddin; Mahmood, Aziz Hasan
2016-07-01
Investigation was carried out to study the effects of maximum aggregate size (MAS) (12.5 mm, 19.0 mm, 25.0 mm, 37.5 mm, and 50.0 mm) on the ultrasonic pulse velocity (UPV) of concrete. For the investigation, first-class bricks were collected and broken to make coarse aggregate. The aggregates were tested for specific gravity, absorption capacity, unit weight, and abrasion resistance. Cylindrical concrete specimens were made with different sand-to-aggregate volume ratios (s/a) (0.40 and 0.45), W/C ratios (0.45, 0.50, and 0.55), and cement contents (375 kg/m³ and 400 kg/m³). The specimens were tested for compressive strength and Young's modulus. UPV through the wet specimens was measured using a Portable Ultrasonic Non-destructive Digital Indicating Tester (PUNDIT). Results indicate that the pulse velocity through concrete increases with an increase in MAS. Relationships between UPV and compressive strength, and between UPV and Young's modulus of concrete, are proposed for different maximum sizes of brick aggregate.
Seymour, Roger S
2010-09-01
The effect of the size of inflorescences, flowers, and cones on the maximum rate of heat production is analysed allometrically in 23 species of thermogenic plants with diverse structures, ranging between 1.8 and 600 g. Total respiration rate (µmol s⁻¹) varies allometrically with spadix mass (M, g) in 15 species of Araceae. Thermal conductance (C, mW °C⁻¹) for spadices scales according to C = 18.5M^0.73. Mass does not significantly affect the difference between floral and air temperature. Aroids with exposed appendices of high surface area have high thermal conductance, consistent with the need to vaporize attractive scents. True flowers have significantly lower heat production and thermal conductance, because closed petals retain heat that benefits resident insects. The florets on aroid spadices, either within a floral chamber or a spathe, have intermediate thermal conductance, consistent with mixed roles. Mass-specific rates of respiration vary between species but reach 900 nmol s⁻¹ g⁻¹ in aroid male florets, exceeding the rates of all other plants and even most animals. Maximum mass-specific respiration appears to be limited by oxygen delivery through individual cells. Reducing mass-specific respiration may be one selective influence on the evolution of the large size of thermogenic flowers.
2010-10-01
Section 697.21 (Wildlife and Fisheries): Gear identification and marking, escape vent, maximum trap size, and ghost panel requirements. (a) Gear identification... Administrator finds to be consistent with paragraph (c) of this section. (d) Ghost panel. (1) Lobster traps...
Gutenberg-Richter b-value maximum likelihood estimation and sample size
Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.
2017-01-01
The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are conscious of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimations of b. Our results estimate the probability of obtaining correct estimates of b, at a given desired precision, for samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
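The estimator at the heart of the method is one line, which makes the paper's Monte Carlo question easy to reproduce in spirit. A minimal sketch, assuming a true b of 1.0, magnitudes rounded to 0.1 units, and Utsu's half-bin correction (catalog sizes and trial counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def b_estimate(mags, m_min, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value with Utsu's half-bin
    correction for magnitudes rounded to bins of width dm."""
    return np.log10(np.e) / (mags.mean() - (m_min - dm / 2))

def spread_of_b(b_true=1.0, m_min=2.0, dm=0.1, n=100, trials=2000):
    """Draw G-R distributed magnitudes, round them to dm, and return
    the mean and standard deviation of the estimated b over trials."""
    beta = b_true * np.log(10)
    est = []
    for _ in range(trials):
        m = m_min + rng.exponential(1 / beta, size=n)
        m = np.round(m / dm) * dm          # magnitude rounding
        est.append(b_estimate(m, m_min, dm))
    return np.mean(est), np.std(est)

for n in (25, 100, 400, 1600):
    mean_b, sd_b = spread_of_b(n=n)
    print(f"n={n:5d}  mean b = {mean_b:.3f}  sd = {sd_b:.3f}")
```

Printing the spread of the estimate across trials shows directly how small samples scatter far from the true value while large samples concentrate around it.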
Mikosch, Thomas Valentin; Moser, Martin
2013-01-01
We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.
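A simulation conveys the flavor of the theorem for i.i.d. jumps, a special case of the stationary, regularly varying setting treated in the paper. Here the jumps are centered classical Pareto variables with tail index alpha, and the maximum increment is rescaled by n^(1/alpha); all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_increment(jumps):
    """max over i < j of (S_j - S_i): the largest rise of the walk,
    computed in one pass via a running minimum of the partial sums."""
    s = np.concatenate(([0.0], np.cumsum(jumps)))
    return float(np.max(s - np.minimum.accumulate(s)))

alpha, n, trials = 1.5, 10_000, 1_000
mean_jump = alpha / (alpha - 1)          # mean of a Pareto(alpha), x_m = 1
scaled = np.empty(trials)
for t in range(trials):
    jumps = (rng.pareto(alpha, n) + 1) - mean_jump   # centred Pareto jumps
    scaled[t] = max_increment(jumps) / n**(1 / alpha)  # Frechet scaling
print(f"median / 95th pct of scaled maxima: "
      f"{np.median(scaled):.2f} / {np.quantile(scaled, 0.95):.2f}")
```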
Lihui Guo
2015-01-01
Full Text Available With the increasing penetration of wind power, the randomness and volatility of wind power output have a growing impact on the safe and steady operation of the power system. To address the uncertainty of wind speed and load demand, this paper applies box-set robust optimization theory to determine the maximum allowable installed capacity of a wind farm, while constraints on node voltage and line capacity are considered. Duality theory is used to simplify the model and convert the uncertain quantities in the constraints into certain ones. For the case of multiple wind farms, a bilevel optimization model to calculate penetration capacity is proposed. Results on the IEEE 30-bus system show that the proposed robust optimization model is correct and effective, and indicate that the fluctuation ranges of wind speed and load, as well as the importance of the grid connection points of wind farms and load points, affect the allowable capacity of the wind farm.
Effects of Nominal Maximum Aggregate Size on the Performance of Stone Matrix Asphalt
Hongying Liu
2017-01-01
Full Text Available It is well known that the performance of hot mix asphalt (HMA) in service is closely related to a proper aggregate gradation. A laboratory study was conducted to investigate the effects of nominal maximum aggregate size (NMAS) on the performance of stone matrix asphalt (SMA). The volumetric characteristics and performance properties obtained from wheel tracking tests, permeability tests, beam bending tests, and Cantabro tests are compared for SMA mixes with different NMAS. The results indicated that the voids in mineral aggregate (VMA) and voids filled with asphalt (VFA) of SMA mixtures increased with a decrease of aggregate size in the aggregate gradation. SMA30 had the lowest optimum asphalt content among all the mixtures. An increase of NMAS contributed to improvement of the rutting resistance of SMA mixtures; however, a decrease of NMAS gave better cracking and raveling resistance. The permeability rate of SMA was primarily affected by the air voids (AV) and break-point sieve, but it was also sensitive to aggregate gradation to some extent, with reduced NMAS corresponding to a lower permeability rate. Based on the test results, SMA5 and SMA13 are suggested for use as a water-proofing layer in bridge deck pavement, and SMA20 and SMA30 are suggested for use as binder courses in asphalt pavement, which need to possess superior rutting resistance at high temperature.
Setting the Renormalization Scale in QCD: The Principle of Maximum Conformality
Brodsky, Stanley J.; Di Giustino, Leonardo
2011-08-19
A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale $\mu$ of the running coupling $\alpha_s(\mu^2)$: The purpose of the running coupling in any gauge theory is to sum all terms involving the $\beta$ function; in fact, when the renormalization scale is set properly, all non-conformal $\beta \neq 0$ terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to those of a conformal theory, i.e., the corresponding theory with $\beta = 0$. The resulting scale-fixed predictions using the 'principle of maximum conformality' (PMC) are independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale-setting in the Abelian limit. The PMC is also the theoretical principle underlying the BLM procedure, commensurate scale relations between observables, and the scale-setting method used in lattice gauge theory. The number of active flavors $n_f$ in the QCD $\beta$ function is also correctly determined. We discuss several methods for determining the PMC/BLM scale for QCD processes. We show that a single global PMC scale, valid at leading order, can be derived from basic properties of the perturbative QCD cross section. The elimination of the renormalization-scheme ambiguity using the PMC will not only increase the precision of QCD tests, but it will also increase the sensitivity of collider experiments to new physics beyond the Standard Model.
Morrison, Glenn; Shaughnessy, Richard; Shu, Shi
2011-02-01
A Monte Carlo analysis of indoor ozone levels in four cities was applied to provide guidance to regulatory agencies on setting maximum ozone emission rates from consumer appliances. Measured distributions of air exchange rates, ozone decay rates, and outdoor ozone levels at monitoring stations were combined with a steady-state indoor air quality model, resulting in emission rate distributions (mg h⁻¹) as a function of the % of building hours protected from exceeding a target maximum indoor concentration of 20 ppb. Whole-year, summer, and winter results for Elizabeth, NJ, Houston, TX, Windsor, ON, and Los Angeles, CA exhibited strong regional differences, primarily due to differences in air exchange rates. Infiltration of ambient ozone at higher average air exchange rates significantly reduces allowable emission rates, even though air exchange also dilutes emissions from appliances. For Houston, TX and Windsor, ON, which have lower average residential air exchange rates, emission rates ranged from −1.1 to 2.3 mg h⁻¹ for scenarios that protect 80% or more of building hours from experiencing ozone concentrations greater than 20 ppb in summer. For Los Angeles, CA and Elizabeth, NJ, with higher air exchange rates, only negative emission rates were allowable to provide the same level of protection. For the 80th-percentile residence, we estimate that an 8-h average limit concentration of 20 ppb would be exceeded, even in the absence of an indoor ozone source, 40 or more days per year in any of the cities analyzed. The negative emission rates emerging from the analysis suggest that only a zero-emission-rate standard is prudent for Los Angeles, Elizabeth, NJ, and other regions with higher summertime air exchange rates. For regions such as Houston with lower summertime air exchange rates, higher emission rates would likely increase occupant exposure to the undesirable products of ozone reactions, thus reinforcing the need for a zero-emission-rate standard.
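The underlying screening model is a one-box steady-state mass balance, C_in = (P·λ·C_out + E/V)/(λ + k), which can be inverted for the allowable emission rate E. A minimal sketch, assuming a 300 m³ dwelling, a conversion of roughly 2 µg/m³ per ppb of ozone, and lognormal stand-ins for the measured city distributions:

```python
import numpy as np

rng = np.random.default_rng(42)
PPB_TO_UG = 2.0            # ~2 ug/m3 per ppb of ozone near room temperature
TARGET_PPB = 20.0

def allowable_emission(lam, k, c_out_ppb, volume=300.0, penetration=1.0):
    """Largest emission rate (mg/h) keeping the steady-state indoor level
    at the target: C_in = (P*lam*C_out + E/V) / (lam + k)."""
    c_t = TARGET_PPB * PPB_TO_UG
    c_o = c_out_ppb * PPB_TO_UG
    e_ug_per_h = volume * (c_t * (lam + k) - penetration * lam * c_o)
    return e_ug_per_h / 1000.0             # ug/h -> mg/h

# illustrative lognormal stand-ins for the measured distributions
lam = rng.lognormal(np.log(0.5), 0.5, 10_000)     # air exchange (1/h)
k = rng.lognormal(np.log(2.8), 0.3, 10_000)       # ozone decay (1/h)
c_out = rng.lognormal(np.log(30.0), 0.4, 10_000)  # outdoor ozone (ppb)

E = allowable_emission(lam, k, c_out)
print(f"emission rate protecting 80% of hours: {np.percentile(E, 20):.2f} mg/h")
```

The 20th percentile of the resulting emission distribution corresponds to protecting 80% of building hours, and it goes negative exactly when infiltration alone pushes indoor ozone past the target.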
Younk, Patrick; Risse, Markus
2012-07-01
The composition of ultra-high energy cosmic rays is an important issue in astroparticle physics research, and additional experimental results are required for further progress. Here we investigate what can be learned from the statistical correlation factor r between the depth of shower maximum and the muon shower size, when these observables are measured simultaneously for a set of air showers. The correlation factor r contains the lowest-order moment of a two-dimensional distribution taking both observables into account, and it is independent of systematic uncertainties of the absolute scales of the two observables. We find that, assuming realistic measurement uncertainties, the value of r can provide a measure of the spread of masses in the primary beam. Particularly, one can differentiate between a well-mixed composition (i.e., a beam that contains large fractions of both light and heavy primaries) and a relatively pure composition (i.e., a beam that contains species all of a similar mass). The number of events required for a statistically significant differentiation is ~200. This differentiation, though diluted, is maintained to a significant extent in the presence of uncertainties in the phenomenology of high energy hadronic interactions. Testing whether the beam is pure or well-mixed is well motivated by recent measurements of the depth of shower maximum.
Iden, Sascha C.; Peters, Andre; Durner, Wolfgang
2015-11-01
The prediction of unsaturated hydraulic conductivity from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. One problem for conductivity predictions from retention functions with continuous derivatives, i.e. continuous water capacity functions, is that the hydraulic conductivity curve exhibits a sharp drop close to water saturation if the pore-size distribution is wide. So far this artifact has been ignored or removed by introducing an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable. We present a new parameterization of the hydraulic properties which uses the original saturation function (e.g. of van Genuchten) and introduces a maximum pore radius only in the pore-bundle model. In contrast to models using an explicit air entry, the resulting conductivity function is smooth and increases monotonically close to saturation. The model concept can easily be applied to any combination of retention curve and pore-bundle model. We derive closed-form expressions for the unimodal and multimodal van Genuchten-Mualem models and apply the model concept to curve fitting and inverse modeling of a transient outflow experiment. Since the new model retains the smoothness and continuous differentiability of the retention model and eliminates the sharp drop in conductivity close to saturation, the resulting hydraulic functions are physically more reasonable and ideal for numerical simulations with the Richards equation or multiphase flow models.
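The idea is easy to reproduce numerically even without the paper's closed-form expressions: evaluate the Mualem integral for a van Genuchten saturation function and truncate the pore bundle at a minimum suction h_min (equivalently, a maximum pore radius, since r ∝ 1/h) while leaving S(h) untouched. All parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import trapezoid

def vg_saturation(h, alpha, n):
    """van Genuchten effective saturation; h is suction head (> 0)."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h)**n)**(-m)

def mualem_relative_k(h_eval, alpha, n, h_min=1e-6, l=0.5, npts=20_000):
    """Numerical Mualem prediction K_r = Se^l * (I(Se)/I(1))^2, where I is
    the integral of 1/h over saturation. h_min > 0 cuts the pore bundle
    at a maximum pore radius (r ~ 1/h) without changing S(h)."""
    h = np.logspace(np.log10(h_min), 6, npts)
    s = vg_saturation(h, alpha, n)
    w = -np.gradient(s, h) / h            # (-dS/dh)/h >= 0
    full = trapezoid(w, h)
    out = []
    for he in h_eval:
        mask = h >= he
        part = trapezoid(w[mask], h[mask]) if mask.sum() > 1 else full
        out.append(vg_saturation(he, alpha, n)**l * (part / full)**2)
    return np.array(out)

h_eval = np.logspace(-2, 4, 8)
k_plain = mualem_relative_k(h_eval, alpha=0.5, n=1.2)              # sharp drop
k_capped = mualem_relative_k(h_eval, alpha=0.5, n=1.2, h_min=2.0)  # smooth
print(np.round(k_plain, 4), np.round(k_capped, 4), sep="\n")
```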
Distributions of lifetime and maximum size of abortive clathrin-coated pits
Banerjee, Anand; Berezhkovskii, Alexander; Nossal, Ralph
2012-09-01
Clathrin-mediated endocytosis is a complex process through which eukaryotic cells internalize nutrients, antigens, growth factors, pathogens, etc. The process occurs via the formation of invaginations on the cell membrane, called clathrin-coated pits (CCPs). Over the years, much has been learned about the mechanism of CCP assembly, but a complete understanding of the assembly process still remains elusive. In recent years, using fluorescence microscopy, studies have been done to determine the statistical properties of CCP formation. In this paper, using a recently proposed coarse-grained, stochastic model of CCP assembly [Banerjee, Berezhkovskii, and Nossal, Biophys. J. 102, 2725 (2012)], we suggest new ways of analyzing such experimental data. To be more specific, we derive analytical expressions for the distribution of maximum size of abortive CCPs, and the probability density of their lifetimes. Our results show how these functions depend on the kinetic and energetic parameters characterizing the assembly process, and therefore could be useful in extracting information about the mechanism of CCP assembly from experimental data. We find excellent agreement between our analytical results and those obtained from kinetic Monte Carlo simulations of the assembly process.
Iden, Sascha; Peters, Andre; Durner, Wolfgang
2017-04-01
Soil hydraulic properties are required to solve the Richards equation, the most widely applied model for variably-saturated flow. While the experimental determination of the water retention curve does not pose significant challenges, the measurement of unsaturated hydraulic conductivity is time consuming and costly. The prediction of the unsaturated hydraulic conductivity curve from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. A well-known problem of conductivity prediction for retention functions with wide pore-size distributions is the sharp drop in conductivity close to water saturation. This problematic behavior is well known for the van Genuchten model if the shape parameter n assumes values smaller than about 1.3. So far, the workaround for this artefact has been to introduce an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable and thus a discontinuous water capacity function. We present an improved parametrization of the hydraulic properties which uses the original capillary saturation function and introduces a maximum pore radius only in the pore-bundle model. Closed-form equations for the hydraulic conductivity function were derived for the unimodal and multimodal retention functions of van Genuchten and have been tested by sensitivity analysis and applied in curve fitting and inverse modeling of multistep outflow experiments. The resulting hydraulic conductivity function is smooth, increases monotonically close to saturation, and eliminates the sharp drop in conductivity close to saturation. Furthermore, the new model retains the smoothness and continuous differentiability of the water retention curve. We conclude that the resulting soil hydraulic functions are physically more reasonable than the ones predicted by previous approaches, and are thus ideally suited for numerical simulations with the Richards equation or multiphase flow models.
W. M. Macek
2011-05-01
Full Text Available To quantify solar wind turbulence, we consider a generalized two-scale weighted Cantor set with two different scales describing the nonuniform distribution of the kinetic energy flux between cascading eddies of various sizes. We examine the generalized dimensions and the corresponding multifractal singularity spectrum depending on one probability measure parameter and two rescaling parameters. In particular, we analyse time series of velocities of the slow-speed streams of the solar wind measured in situ by the Voyager 2 spacecraft in the outer heliosphere during solar maximum at various distances from the Sun: 10, 30, and 65 AU. This allows us to look at the evolution of multifractal intermittent scaling of the solar wind in the distant heliosphere. Namely, it appears that while the degree of multifractality for the solar wind during solar maximum is only weakly correlated with heliospheric distance, the multifractal spectrum could be substantially asymmetric in the very distant heliosphere beyond the planetary orbits. Therefore, one could expect that this scaling near the frontiers of the heliosphere should be asymmetric. It is worth noting that the model with two different scaling parameters gives better agreement with the solar wind data, especially for the negative index of the generalized dimensions. We therefore argue that there is a need to use a two-scale cascade model. Hence we propose this model as a useful tool for the analysis of intermittent turbulence in various environments, and we hope that our general asymmetric multifractal model could shed more light on the nature of turbulence.
Counting independent sets of a fixed size in graphs with a given minimum degree
Engbers, John
2012-01-01
Galvin showed that for all fixed $\delta$ and sufficiently large $n$, the $n$-vertex graph with minimum degree $\delta$ that admits the most independent sets is the complete bipartite graph $K_{\delta,n-\delta}$. He conjectured that except perhaps for some small values of $t$, the same graph yields the maximum count of independent sets of size $t$ for each possible $t$. Evidence for this conjecture was recently provided by Alexander, Cutler, and Mink, who showed that for all triples $(n,\delta,t)$ with $t \geq 3$, no $n$-vertex bipartite graph with minimum degree $\delta$ admits more independent sets of size $t$ than $K_{\delta,n-\delta}$. Here we make further progress. We show that for all triples $(n,\delta,t)$ with $\delta \leq 3$ and $t \geq 3$, no $n$-vertex graph with minimum degree $\delta$ admits more independent sets of size $t$ than $K_{\delta,n-\delta}$, and we obtain the same conclusion for $\delta > 3$ and $t \geq 2\delta+1$. Our proofs lead us naturally to the study of an interesting family…
Izumida, Yuki; Okuda, Koji
2014-05-01
We formulate the work output and efficiency for linear irreversible heat engines working between a finite-sized hot heat source and an infinite-sized cold heat reservoir until the total system reaches the final thermal equilibrium state with a uniform temperature. We prove that when the heat engines operate at the maximum power under the tight-coupling condition without heat leakage, the work output is just half of the exergy, which is known as the maximum available work extracted from a heat source. As a consequence, the corresponding efficiency is also half of its quasistatic counterpart.
Kai Yan
2015-01-01
Full Text Available A predictive model for the droplet size and velocity distributions of a pressure swirl atomizer is proposed based on the maximum entropy formalism (MEF). The constraint conditions of the MEF model include the conservation laws of mass, momentum, and energy. The effects of liquid swirling strength, Weber number, gas-to-liquid axial velocity ratio, and gas-to-liquid density ratio on the droplet size and velocity distributions of a pressure swirl atomizer are investigated. Results show that the model based on the maximum entropy formalism predicts droplet size and velocity distributions well under different spray conditions. Liquid swirling strength, Weber number, gas-to-liquid axial velocity ratio, and gas-to-liquid density ratio each have different effects on the droplet size and velocity distributions of a pressure swirl atomizer.
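Computationally, the MEF reduces to solving for Lagrange multipliers so that the entropy-maximizing distribution reproduces the imposed conservation constraints. The sketch below keeps only a single mass-mean constraint rather than the paper's full mass/momentum/energy set; the diameter grid, scaling, and target mass-mean diameter are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import brentq

d = np.linspace(5.0, 300.0, 600)          # droplet diameter grid (um)

def maxent_pdf(lam):
    """Entropy-maximizing discrete distribution under one volume-mean
    constraint: p_i proportional to exp(-lam * d_i^3)."""
    w = np.exp(-lam * (d / 100.0) ** 3)   # d scaled to keep exponents tame
    return w / w.sum()

def d30_residual(lam, target_d30=80.0):
    """Deviation of the mass-mean diameter D30 from its target value."""
    p = maxent_pdf(lam)
    return (np.sum(p * d ** 3)) ** (1.0 / 3.0) - target_d30

lam = brentq(d30_residual, 1e-6, 50.0)    # solve for the multiplier
p = maxent_pdf(lam)
print(f"lambda = {lam:.4f}, number-mean diameter = {np.sum(p * d):.1f} um")
```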
Poorter, L.; Hawthorne, W.D.; Sheil, D.; Bongers, F.J.J.M.
2008-01-01
The diversity and structure of communities are partly determined by how species partition resource gradients. Plant size is an important indicator of a species' position along the vertical light gradient in the vegetation. Here, we compared the size distributions of tree species in 44 Ghanaian…
Saarinen, Juha J.; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Evans, Alistair R.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Sibly, Richard M.; Stephens, Patrick R.; Theodor, Jessica; Uhen, Mark D.; Smith, Felisa A.
2014-01-01
There is accumulating evidence that macroevolutionary patterns of mammal evolution during the Cenozoic follow similar trajectories on different continents. This would suggest that such patterns are strongly determined by global abiotic factors, such as climate, or by basic eco-evolutionary processes such as the filling of niches by specialization. The similarity of pattern would be expected to extend to the history of individual clades. Here, we investigate the temporal distribution of maximum size observed within individual orders globally and on separate continents. While the maximum sizes of individual orders of large land mammals differ and comprise several families, the times at which orders reach their maximum size show strong congruence, peaking in the Middle Eocene, the Oligocene, and the Plio-Pleistocene. The Eocene peak occurs when global temperature and land mammal diversity are high and is best explained as a result of niche expansion rather than abiotic forcing. Since the Eocene, there is a significant correlation between maximum size frequency and the global temperature proxy. The Oligocene peak is not statistically significant and may in part be due to sampling issues. The peak in the Plio-Pleistocene occurs when global temperature and land mammal diversity are low; it is statistically the most robust one and is best explained by global cooling. We conclude that the macroevolutionary patterns observed are a result of the interplay between eco-evolutionary processes and abiotic forcing.
Prediction of maximum casting defect size in MAR-M-247 alloy processed by hot isostatic pressing
Miroslav Šmíd
2015-02-01
Full Text Available Nickel-based MAR-M-247 superalloy treated by hot isostatic pressing was investigated with the aim of identifying the influence of casting defect size on fatigue life. Two testing temperatures, 650 and 800 °C, and one stress amplitude were chosen for the fatigue tests. The Murakami approach and the largest extreme value distribution theory were applied. It was found that the maximum size of casting defects in a specimen can be satisfactorily predicted. The fatigue life of the specimens was in good agreement with assumptions based on the evaluation and prediction of the casting defect size.
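The largest-extreme-value step is compact enough to sketch: fit a Gumbel line to the maximum defect found in each inspected section, then extrapolate to the much larger volume of the component through a return period. The defect data below are synthetic, and a method-of-moments fit stands in for the usual probability-plot regression.

```python
import numpy as np

rng = np.random.default_rng(7)
# Largest defect (sqrt(area), um) in each of 30 inspected sections
# -- synthetic numbers, not the paper's measurements.
maxima = rng.gumbel(loc=120.0, scale=25.0, size=30)

# Method-of-moments Gumbel fit: scale = sd*sqrt(6)/pi, loc = mean - gamma*scale
scale = maxima.std(ddof=1) * np.sqrt(6) / np.pi
loc = maxima.mean() - 0.5772 * scale          # 0.5772 = Euler-Mascheroni

# Characteristic largest defect for a volume T times the inspected one
T = 500.0
y = -np.log(-np.log(1.0 - 1.0 / T))           # Gumbel reduced variate
print(f"predicted maximum defect size: {loc + scale * y:.0f} um")
```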
Setting maximum sustainable yield targets when yield of one species affects that of other species
Rindorf, Anna; Reid, David; Mackinson, Steve;
2012-01-01
…species. But how should we prioritize and identify the most appropriate targets? Do we prefer to maximize by focusing on total yield in biomass across species, or are other measures targeting maximization of profits or preserving high living qualities more relevant? And how do we ensure that targets remain … industry, managers, and NGO representatives. The workshop was designed to identify variants of maximum sustainable yield (MSY) which account for the necessary trade-offs and estimate the preferences of the workshop participants for each of these variants across five regional groups: the Baltic Sea …
On the maximum orders of an induced forest, an induced tree, and a stable set
Hertz Alain
2014-01-01
Full Text Available Let G be a connected graph, n the order of G, and f (resp. t) the maximum order of an induced forest (resp. tree) in G. We show that f − t is at most n − 2√(n−1). In the special case where n is of the form a² + 1 for some even integer a ≥ 4, f − t is at most n − 2√(n−1) − 1. We also prove that these bounds are tight. In addition, letting α denote the stability number of G, we show that α − t is at most n + 1 − 2√(2n); this bound is also tight.
On the maximum rate of change in sunspot number growth and the size of the sunspot cycle
Wilson, Robert M.
1990-01-01
Statistically significant correlations exist between the size (maximum amplitude) of the sunspot cycle and, especially, the maximum value of the rate of rise during the ascending portion of the sunspot cycle, where the rate of rise is computed either as the difference in the month-to-month smoothed sunspot number values or as the 'average rate of growth' in smoothed sunspot number from sunspot minimum. Based on the observed values of these quantities (equal to 10.6 and 4.63, respectively) as of early 1989, it is inferred that cycle 22's maximum amplitude will be about 175 ± 30 or 185 ± 10, respectively, where the error bars represent approximately twice the average error found during cycles 10-21 from the two fits.
Draxler, Clemens; Alexandrowicz, Rainer W
2015-12-01
This paper refers to the exponential family of probability distributions and the conditional maximum likelihood (CML) theory. It is concerned with the determination of the sample size for three groups of tests of linear hypotheses, known as the fundamental trinity of Wald, score, and likelihood ratio tests. The main practical purpose refers to the special case of tests of the class of Rasch models. The theoretical background is discussed and the formal framework for sample size calculations is provided, given a predetermined deviation from the model to be tested and the probabilities of the errors of the first and second kinds.
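For all three tests the asymptotic calculation has the same shape: a central chi-square critical value under the model, compared against a noncentral chi-square whose noncentrality grows linearly with sample size under the predetermined deviation. A minimal sketch, with the per-observation contribution to the noncentrality as an assumed input:

```python
from scipy.stats import chi2, ncx2

def required_sample_size(effect_per_obs, df=1, alpha=0.05, power=0.80):
    """Smallest n such that a chi-square test with df degrees of freedom
    rejects with probability >= power, when the test statistic is
    noncentral chi-square with noncentrality n * effect_per_obs
    (the shared asymptotic form of the Wald, score, and LR tests)."""
    crit = chi2.ppf(1 - alpha, df)
    n = 1
    while ncx2.sf(crit, df, n * effect_per_obs) < power:
        n += 1
    return n

# e.g. a deviation contributing 0.02 per subject to the noncentrality
print(required_sample_size(0.02))   # about 400 subjects
```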
Checa-Garcia, Ramiro
2013-01-01
The main challenges of measuring precipitation are related to the spatio-temporal variability of the drop-size distribution, to the uncertainties that condition the modeling of that distribution, and to the instrumental errors present in the in situ estimations. This PhD dissertation proposes advances on all of these questions. The relevance of the spatial variability of the drop-size distribution for remote sensing measurements and hydro-meteorological field studies is asserted by analyzing the measurements of a set of disdrometers deployed on a network of 5 square kilometers. This study comprises the spatial variability of integral rainfall parameters, the Z-R relationships, and the variations within the one-moment scaling method. The modeling of the drop-size distribution is analyzed by applying the MaxEnt method and comparing it with the method of moments and maximum likelihood. The instrumental errors are analyzed with a comprehensive comparison of the sampling and binning uncertainties that affect actual devices…
K. Seshadri Sastry
2013-06-01
Full Text Available This paper presents Adaptive Population Sizing Genetic Algorithm (AGA) assisted Maximum Likelihood (ML) estimation of Orthogonal Frequency Division Multiplexing (OFDM) symbols in the presence of nonlinear distortions. The proposed algorithm is simulated in MATLAB and compared with existing estimation algorithms such as iterative DAR, decision feedback clipping removal, iteration decoder, Genetic Algorithm (GA) assisted ML estimation, and theoretical ML estimation. Simulation results show that the performance of the proposed AGA-assisted ML estimation algorithm is superior to that of the existing estimation algorithms. Further, the computational complexity of GA-assisted ML estimation increases with the number of generations and/or the population size; in the proposed AGA-assisted ML estimation algorithm, the population size is adaptive and depends on the best fitness. The population size in GA-assisted ML estimation is fixed, and a sufficiently large population is used to ensure good performance; in the proposed algorithm, the population size changes adaptively as required, thus reducing the complexity of the algorithm.
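The abstract does not spell out the adaptation rule, so the following is only a generic sketch of a population-sizing GA in which the population shrinks while the best fitness keeps improving and grows when it stalls, with a toy objective standing in for the OFDM likelihood metric; every operator and constant here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(pop):
    """Toy objective standing in for the ML metric of the paper."""
    return -np.sum((pop - 0.7) ** 2, axis=1)

def adaptive_ga(dim=8, size=60, size_min=20, size_max=200, gens=100):
    P = rng.uniform(0, 1, (size, dim))
    best_prev = -np.inf
    for _ in range(gens):
        f = fitness(P)
        improving = f.max() > best_prev
        best_prev = max(f.max(), best_prev)
        # adapt the population size to progress in best fitness
        size = max(size_min, int(size * 0.9)) if improving \
            else min(size_max, int(size * 1.2))
        # binary tournaments, blend crossover, Gaussian mutation
        idx = rng.integers(0, len(P), (size, 2, 2))
        winners = np.where(f[idx[..., 0]] > f[idx[..., 1]],
                           idx[..., 0], idx[..., 1])
        w = rng.uniform(0, 1, (size, 1))
        P = w * P[winners[:, 0]] + (1 - w) * P[winners[:, 1]]
        P += rng.normal(0, 0.05, P.shape)
    return P[fitness(P).argmax()]

print(np.round(adaptive_ga(), 2))
```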
Hills, Thomas T; Noguchi, Takao; Gibbert, Michael
2013-10-01
How do changes in choice-set size influence information search and subsequent decisions? Moreover, does information overload influence information processing with larger choice sets? We investigated these questions by letting people freely explore sets of gambles before choosing one of them, with the choice sets either increasing or decreasing in number for each participant (from two to 32 gambles). Set size influenced information search, with participants taking more samples overall, but sampling a smaller proportion of gambles and taking fewer samples per gamble, when set sizes were larger. The order of choice sets also influenced search, with participants sampling from more gambles and taking more samples overall if they started with smaller as opposed to larger choice sets. Inconsistent with information overload, information processing appeared consistent across set sizes and choice order conditions, reliably favoring gambles with higher sample means. Despite the lack of evidence for information overload, changes in information search did lead to systematic changes in choice: People who started with smaller choice sets were more likely to choose gambles with the highest expected values, but only for small set sizes. For large set sizes, the increase in total samples increased the likelihood of encountering rare events at the same time that the reduction in samples per gamble amplified the effect of these rare events when they occurred-what we call search-amplified risk. This led to riskier choices for individuals whose choices most closely followed the sample mean.
Kupczewska-Dobecka, Małgorzata; Soćko, Renata; Czerczak, Sławomir
2006-01-01
The aim of this work is to analyse the Maximum Admissible Concentration (MAC) values proposed for irritants by the Group of Experts for Chemical Agents in Poland, based on the RD50 value. In 1994-2004, MAC values for irritants based on the RD50 value were set for 17 chemicals. For the purpose of the analysis, 1/10 RD50, 1/100 RD50, and the MAC/RD50 ratio were calculated. The determined MAC values are within the 0.01-0.09 RD50 range. The RD50 value is a good rough criterion for setting MAC values for irritants, and it makes it possible to estimate admissible exposure levels quickly. It has become clear that, in some cases, simply setting the MAC value for an irritant at the level of 0.03 RD50 may be insufficient to determine precisely the possible hazard to workers' health. Other available toxicological data, such as the NOAEL (No-Observed-Adverse-Effect Level) and LOAEL (Lowest-Observed-Adverse-Effect Level), should always be considered as well.
Wu, Swei-Pi; Ho, Cheng-Pin; Yen, Chin-Li
2011-01-01
A wok with a straight handle is one of the most common cooking utensils in the Asian kitchen. This common cooking instrument has seldom been examined by ergonomists. This research used a two-factor randomized complete block design to investigate the effects of wok size (with three diameters - 36 cm, 39 cm and 42 cm) and handle angle (25°, 10°, -5°, -20°, and -35°) on the task of flipping. The measurement criteria included the maximum acceptable weight of wok flipping (MAWF), the subjective rating and the subjective ranking. Twelve experienced males volunteered to take part in this study. The results showed that both the wok size and handle angle had a significant effect on the MAWF, the subjective rating and the subjective ranking. Additionally, there is a size-weight illusion associated with flipping tasks. In general, a small wok (36 cm diameter) with an ergonomically bent handle (-20° ± 15°) is the optimal design, for male cooks, for the purposes of flipping.
Persson, Lennart; Elliott, J Malcolm
2013-05-01
The theory of cannibal dynamics predicts a link between population dynamics and individual life history. In particular, increased individual growth has, in both modeling and empirical studies, been shown to result from a destabilization of population dynamics. We used data from a long-term study of the dynamics of two leech (Erpobdella octoculata) populations to test the hypothesis that maximum size should be higher in a cycling population; one of the study populations exhibited a delayed feedback cycle while the other population showed no sign of cyclicity. A hump-shaped relationship between individual mass of 1-year-old leeches and offspring density the previous year was present in both populations. As predicted from the theory, the maximum mass of individuals was much larger in the fluctuating population. In contrast to predictions, the higher growth rate was not related to energy extraction from cannibalism. Instead, the higher individual mass is suggested to be due to increased availability of resources due to a niche widening with increased individual body mass. The larger individual mass in the fluctuating population was related to a stronger correlation between the densities of 1-year-old individuals and 2-year-old individuals the following year in this population. Although cannibalism was the major mechanism regulating population dynamics, its importance was negligible in terms of providing cannibalizing individuals with energy subsequently increasing their fecundity. Instead, the study identifies a need for theoretical and empirical studies on the largely unstudied interplay between ontogenetic niche shifts and cannibalistic population dynamics.
Albuquerque, Fabio; Beier, Paul
2015-01-01
Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest number of sites (the minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (the maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to the number of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from … algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
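RWR itself is a two-line computation, which is the paper's point. A minimal sketch on a toy sites-by-species occurrence matrix (the real datasets and the Zonation comparison are not reproduced here):

```python
import numpy as np

def rwr_ranking(occ):
    """occ: sites x species presence/absence matrix. A species found in
    c sites contributes 1/c to each of them; sites are ranked by the
    summed contributions (rarity-weighted richness)."""
    counts = occ.sum(axis=0)                  # sites occupied per species
    scores = (occ / counts).sum(axis=1)       # RWR per site
    return np.argsort(scores)[::-1], scores

occ = np.array([[1, 0, 0, 1, 0],              # toy data: 6 sites x 5 species
                [1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 0, 1],
                [1, 0, 0, 0, 1],
                [1, 1, 1, 1, 1]], dtype=float)
order, scores = rwr_ranking(occ)
print("site priority order:", order, "scores:", np.round(scores, 2))
```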
Bernstein, Joshua G W; Summers, Van; Iyer, Nandini; Brungart, Douglas S
2012-10-01
Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups.
Tareef K. Mustafa
2010-01-01
Full Text Available Problem statement: Stylometric authorship attribution is a text-mining approach that analyzes texts, e.g., novels and plays written by famous authors, and tries to measure each author's style by choosing attributes that reveal that author's way of writing, on the assumption that each writer has a way of writing that no other writer shares; authorship attribution is thus the task of identifying the author of a given text. In this study, we propose an authorship attribution algorithm that improves the accuracy of stylometric features so that different professionals can be discriminated nearly as well as different persons are by fingerprints. Approach: The main target of this study is to build an algorithm that supports decision-making systems, enabling users to predict and choose the right author for a specific anonymous novel under consideration, by using a learning procedure to teach the system the stylometric map of the author so that it behaves as an expert opinion. Stylometric authorship attribution (AA) usually depends on the frequent word as the best attribute that can be used; many studies have sought other beneficial attributes, but the frequent word remains ahead of the other attributes, giving better results in research and experiments, and the best technique used so far is counting the bag-of-words with maximum itemsets. Results: To improve AA techniques, we use a new pack of attributes with a new measurement tool. The first attribute used in this study is the frequent pair, meaning a pair of words that always appear together. This attribute is clearly not new, but it has not been a successful attribute compared with the frequent word under maximum-itemset counting; the word pairs made some mistakes, as the experimental results show. We improve the winnow algorithm by combining it with the computational…
Choice set size and decision making: the case of Medicare Part D prescription drug plans.
Bundorf, M Kate; Szrek, Helena
2010-01-01
The impact of choice on consumer decision making is controversial in US health policy. The authors' objective was to determine how choice set size influences decision making among Medicare beneficiaries choosing prescription drug plans. The authors randomly assigned members of an Internet-enabled panel age 65 and older to sets of prescription drug plans of varying sizes (2, 5, 10, and 16) and asked them to choose a plan. Respondents answered questions about the plan they chose, the choice set, and the decision process. The authors used ordered probit models to estimate the effect of choice set size on the study outcomes. Both the benefits of choice, measured by whether the chosen plan is close to the ideal plan, and the costs, measured by whether the respondent found decision making difficult, increased with choice set size. Choice set size was not associated with the probability of enrolling in any plan. Medicare beneficiaries face a tension between not wanting to choose from too many options and feeling happier with an outcome when they have more alternatives. Interventions that reduce cognitive costs when choice sets are large may make this program more attractive to beneficiaries.
Huang, Yu
Solar energy is one of the major alternative renewable energy options owing to its huge abundance and accessibility. Because of its intermittent nature, Maximum Power Point Tracking (MPPT) techniques are in high demand when a photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. First, a practical PV system model is studied, with the series and shunt resistances (neglected in some research) determined. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed variable, deploying input-impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step-size P&O algorithm is proposed, with major modifications for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy, and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions, and continuous insolation variation.
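A generic adaptive-step P&O iteration on the boost-converter duty ratio might look like the sketch below; the step law proportional to |dP/dV|, the gain k, the step cap, and the duty limits are illustrative choices rather than the thesis's specific modifications for sharp and low insolation.

```python
def adaptive_po_step(v, i, state, k=0.05, d_min=0.05, d_max=0.95):
    """One perturb-and-observe iteration on the converter duty ratio.
    The step scales with |dP/dV|, so it is large far from the MPP and
    small near it; k is a tuning gain, not a value from the thesis."""
    p = v * i
    dp, dv = p - state["p"], v - state["v"]
    step = min(k * abs(dp / dv), 0.05) if dv != 0 else state["step"]
    # keep perturbing the same way while power rises, else reverse
    direction = state["dir"] if dp >= 0 else -state["dir"]
    duty = min(max(state["duty"] + direction * step, d_min), d_max)
    state.update(p=p, v=v, duty=duty, dir=direction, step=step)
    return duty

# controller state carried between switching periods (initial duty 50%)
state = {"p": 0.0, "v": 0.0, "duty": 0.5, "dir": 1, "step": 0.01}
```

Each switching period the controller samples (v, i), calls adaptive_po_step, and applies the returned duty ratio; the step shrinks automatically as dP/dV flattens near the maximum power point.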
Dodrill, Michael J.; Yackulic, Charles B.; Kennedy, Theodore A.; Haye, John W
2016-01-01
The cold and clear water conditions present below many large dams create ideal conditions for the development of economically important salmonid fisheries. Many of these tailwater fisheries have experienced declines in the abundance and condition of large trout species, yet the causes of these declines remain uncertain. Here, we develop, assess, and apply a drift-foraging bioenergetics model to identify the factors limiting rainbow trout (Oncorhynchus mykiss) growth in a large tailwater. We explored the relative importance of temperature, prey quantity, and prey size by constructing scenarios where these variables, both singly and in combination, were altered. Predicted growth matched empirical mass-at-age estimates, particularly for younger ages, demonstrating that the model accurately describes how current temperature and prey conditions interact to determine rainbow trout growth. Modeling scenarios that artificially inflated prey size and abundance demonstrate that rainbow trout growth is limited by the scarcity of large prey items and overall prey availability. For example, shifting 10% of the prey biomass to the 13 mm (large) length class, without increasing overall prey biomass, increased lifetime maximum mass of rainbow trout by 88%. Additionally, warmer temperatures resulted in lower predicted growth at current and lower levels of prey availability; however, growth was similar across all temperatures at higher levels of prey availability. Climate change will likely alter flow and temperature regimes in large rivers with corresponding changes to invertebrate prey resources used by fish. Broader application of drift-foraging bioenergetics models to build a mechanistic understanding of how changes to habitat conditions and prey resources affect growth of salmonids will benefit management of tailwater fisheries.
Primed to be inflexible: The influence of set size on cognitive flexibility during childhood
Lily FitzGibbon
2014-02-01
Full Text Available One of the hallmarks of human cognition is cognitive flexibility, the ability to adapt thoughts and behaviors according to changing task demands. Previous research has suggested that the number of different exemplars that must be processed within a task (the set size) can influence an individual's ability to switch flexibly between different tasks. This paper provides evidence that when tasks have a small set size, children's cognitive flexibility is impaired compared to when tasks have a large set size. This paper also offers insights into the mechanism by which this effect comes about. Understanding how set size interacts with task-switching informs the debate regarding the relative contributions of bottom-up priming and top-down control processes in the development of cognitive flexibility. We tested two accounts for the relationship between set size and cognitive flexibility: the (bottom-up) Stimulus-Task Priming account and the (top-down) Rule Representation account. Our findings offered support for the Stimulus-Task Priming account, but not for the Rule Representation account. They suggest that children are susceptible to bottom-up priming caused by stimulus repetition, and that this priming can impair their ability to switch between tasks. These findings make important theoretical and practical contributions to the executive function literature: Theoretically, they show that the basic features of a task exert a significant influence on children's ability to flexibly shift between tasks through bottom-up priming effects. Practically, they suggest that children's cognitive flexibility may have been underestimated relative to adults', as paradigms used with children typically have a smaller set size than those used with adults. These findings also have applications in education, where they have the potential to inform teaching in key areas where cognitive flexibility is required, such as mathematics and literacy.
Abadi, Ali Salehi Sahl; Mazlomi, Adel; Saraji, Gebraeil Nasl; Zeraati, Hojjat; Hadian, Mohammad Reza; Jafari, Amir Homayoun
2015-10-01
In spite of the widespread use of automation in industry, manual material handling (MMH) is still performed in many occupational settings. The emphasis on ergonomics in MMH tasks is due to the potential risks of workplace accidents and injuries. This study aimed to assess the effects of box size, lift frequency, and lift height on the maximum acceptable weight of lift (MAWL) and heart rate of male university students in Iran. This experimental study was conducted in 2015 with 15 male students recruited from Tehran University of Medical Sciences. Each participant performed 18 different lifting tasks that involved three lifting frequencies (1 lift/min, 4.3 lifts/min, and 6.67 lifts/min), three lifting heights (floor to knuckle, knuckle to shoulder, and shoulder to arm reach), and two box sizes. Each set of experiments was conducted during a 20 min work period using the free-style lifting technique. The working heart rates (WHR) were recorded for the entire duration. In this study, we used SPSS version 18 software and descriptive statistical methods, analysis of variance (ANOVA), and the t-test for data analysis. The results of the ANOVA showed a significant difference between the means of MAWL across lifting frequencies (p = 0.02). Tukey's post hoc test indicated a significant difference between the frequencies of 1 lift/min and 6.67 lifts/min (p = 0.01). There was a significant difference between the mean heart rates across lifting frequencies (p = 0.006), and Tukey's post hoc test indicated a significant difference between the frequencies of 1 lift/min and 6.67 lifts/min (p = 0.004). However, there was no significant difference in the mean MAWL or the mean heart rate across lifting heights (p > 0.05). The results of the t-test showed significant differences in the mean MAWL and the mean heart rate between the two box sizes (p < 0.001). Based on the results of…
Fatty Acid Availability Sets Cell Envelope Capacity and Dictates Microbial Cell Size.
Vadia, Stephen; Tse, Jessica L; Lucena, Rafael; Yang, Zhizhou; Kellogg, Douglas R; Wang, Jue D; Levin, Petra Anne
2017-06-19
Nutrients, and by extension biosynthetic capacity, positively impact cell size in organisms throughout the tree of life. In bacteria, cell size is reduced 3-fold in response to nutrient starvation or accumulation of the alarmone ppGpp, a global inhibitor of biosynthesis. However, whether biosynthetic capacity as a whole determines cell size or whether particular anabolic pathways are more important than others remains an open question. Here we identify fatty acid synthesis as the primary biosynthetic determinant of Escherichia coli size and present evidence supporting a similar role for fatty acids as a positive determinant of size in the Gram-positive bacterium Bacillus subtilis and the single-celled eukaryote Saccharomyces cerevisiae. Altering fatty acid synthesis recapitulated the impact of altering nutrients on cell size and morphology, whereas defects in other biosynthetic pathways had either a negligible or fatty-acid-dependent effect on size. Together, our findings support a novel "outside-in" model in which fatty acid availability sets cell envelope capacity, which in turn dictates cell size. In the absence of ppGpp, limiting fatty acid synthesis leads to cell lysis, supporting a role for ppGpp as a linchpin linking expansion of cytoplasmic volume to the growth of the cell envelope to preserve cellular integrity.
Determination of size-specific exposure settings in dental cone-beam CT.
Pauwels, Ruben; Jacobs, Reinhilde; Bogaerts, Ria; Bosmans, Hilde; Panmekiate, Soontra
2017-01-01
To estimate the possible reduction of tube output as a function of head size in dental cone-beam computed tomography (CBCT). A 16 cm PMMA phantom, containing a central and six peripheral columns filled with PMMA, was used to represent an average adult male head. The phantom was scanned using CBCT, with 0-6 peripheral columns having been removed in order to simulate varying head sizes. For five kV settings (70-90 kV), the mAs required to reach a predetermined image noise level was determined, and corresponding radiation doses were derived. Results were expressed as a function of head size, age, and gender, based on growth reference charts. The use of 90 kV consistently resulted in the largest relative dose reduction. A potential mAs reduction ranging from 7% to 50% was seen for the different simulated head sizes, showing an exponential relation between head size and mAs. An optimized exposure protocol based on head circumference or age/gender is proposed. A considerable dose reduction, through reduction of the mAs rather than the kV, is possible for small-sized patients in CBCT, including children and females. Size-specific exposure protocols should be clinically implemented. • Fixed exposure settings in CBCT result in overexposure for smaller patients • For children, considerable dose reduction is possible without compromising image quality • A reduction in mAs is more dose-efficient than a kV reduction • An optimized exposure protocol was proposed based on phantom measurements • This protocol should be validated in a clinical setting.
Using Genetic Algorithms to Optimise Rough Set Partition Sizes for HIV Data Analysis
Crossingham, Bodie
2007-01-01
In this paper, we present a method to optimise rough set partition sizes, with which rule extraction is performed on HIV data. The genetic algorithm optimisation technique is used to determine the partition sizes of a rough set in order to maximise the rough set's prediction accuracy. The proposed method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. Six demographic variables were used in the analysis: race, age of mother, education, gravidity, parity, and age of father, with the outcome or decision being either HIV positive or negative. Rough set theory was chosen because the extracted rules are easy to interpret. The prediction accuracy with equal-width-bin partitioning is 57.7%, while the accuracy achieved after optimising the partitions is 72.8%. Several other methods have been used to analyse the HIV data; their results are stated and compared to those of rough set theory (RST).
The influence of poster prompts on stair use: The effects of setting, poster size and content.
Kerr, Jacqueline; Eves, Frank F.; Carroll, Douglas
2001-11-01
OBJECTIVES: There is evidence that poster prompts increase stair use. The present study was concerned with the effects of poster size, poster message, and setting on stair use. DESIGN: Using a quasi-experimental design, four observational studies were undertaken in which stair and escalator use were logged during 2-week baseline periods and 2-week intervention periods. METHODS: In the first two studies, observations were undertaken in two shopping centres (total N = 30,018) with the size of poster varying. In the other two studies (total N = 37,907), one in a shopping centre and one in a train station, two poster messages were tested in both sites. RESULTS: Pedestrian traffic volume was controlled for statistically. There were significant increases in stair use with A1- and A2-, but not A3-size posters. Overall, the two different poster messages were both effective in encouraging stair use. Interactions between gender and message setting, however, reflected the fact that the 'stay healthy, save time' poster had little impact on female shoppers but was highly effective for female commuters. CONCLUSION: These results suggest that developers of health-promotion posters pay attention to poster size. They also indicate that it is insufficient to segment audiences by gender without considering the setting and motivational context.
Hun-Ki Chung; Kyu-Won Kim; Jong-Wook Chung; Jung-Ro Lee; Sok-Young Lee; Anupam Dixit; Hee-Kyoung Kang; Weiguo Zhao; Kenneth L. McNally; Ruraidh S. Hamilton; Jae-Gyun Gwag; Yong-Jin Park
2009-01-01
A new heuristic approach was undertaken for the establishment of a core set for the diversity research of rice. As a result, 107 entries were selected from the 10,368 characterized accessions. The core set derived using this new approach provided a good representation of the characterized accessions present in the entire collection. No significant differences for the mean, range, standard deviation and coefficient of variation of each trait were observed between the core and existing collections. We also compared the diversity of core sets established using this Heuristic Core Collection (HCC) approach with those of core sets established using conventional clustering methods. This modified heuristic algorithm can also be used to select genotype data with allelic richness and reduced redundancy, and to facilitate management and use of large collections of plant genetic resources in a more efficient way.
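The abstract does not spell the heuristic out; as a rough sketch of the core-selection idea (a generic greedy maximin rule, not the authors' HCC algorithm), one can repeatedly add the accession farthest from the current core:

```python
import numpy as np
from scipy.spatial.distance import cdist

def maximin_core(X, m, seed=0):
    """Greedily pick m rows of X, each new pick being the point
    farthest from the already selected core."""
    rng = np.random.default_rng(seed)
    core = [int(rng.integers(len(X)))]
    while len(core) < m:
        d = cdist(X, X[core]).min(axis=1)  # distance to nearest core entry
        core.append(int(d.argmax()))
    return core

# e.g. pick 107 entries from a standardized trait matrix of 10,368 accessions
X = np.random.default_rng(1).normal(size=(10368, 12))  # hypothetical traits
core_idx = maximin_core(X, 107)
```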
Influence of voxel size settings in X-Ray CT Imagery of soil in scaling properties
Heck, R.; Scaiff, N. T.; Andina, D.; Tarquis, A. M.
2012-04-01
Fundamental to the interpretation and comparison of X-ray CT imagery of soil is recognition of the objectivity and consistency of the procedures used to generate the 3D models. Notably, there has been a lack of consistency in the size of voxels used for diverse interpretations of soil features and processes; in part, this is due to the ongoing evolution of instrumentation and computerized image processing capacity. Moreover, there is still need for discussion on whether standard voxel sizes should be recommended, and what those would be. Regardless of any eventual adoption of such standards, there is a need to also consider the manner in which voxel size is set in the 3D imagery. In typical approaches to X-ray CT imaging, voxel size may be set at three stages: image acquisition (involving the position of the sample relative to the tube and detector), image reconstruction (where binning of pixels in the acquired images may occur), and post-reconstruction re-sampling (which may involve algorithms such as tri-cubic convolution). This research evaluates and compares the spatial distribution of intra-aggregate voids, as well as their scaling properties, in 3D imagery of equivalent voxel size generated using various combinations of the afore-mentioned approaches. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. AGL2010-21501/AGR is greatly appreciated.
Enumerating Set Partitions According to the Number of Descents of Size d or More
Toufik Mansour; Mark Shattuck; Chunwei Song
2012-11-01
Let $P(n,k)$ denote the set of partitions of $\{1,2,\ldots,n\}$ having exactly $k$ blocks. In this paper, we find the generating function which counts the members of $P(n,k)$ according to the number of descents of size $d$ or more, where $d \geq 1$ is fixed. An explicit expression in terms of Stirling numbers of the second kind may be given for the total number of such descents in all the members of $P(n,k)$. We also compute the generating function for the statistic recording the number of ascents of size $d$ or more and show that it has the same distribution on $P(n,k)$ as the prior statistic for descents when $d \geq 2$, by both algebraic and combinatorial arguments.
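The stated equidistribution is easy to check by brute force for small cases. In the sketch below, partitions are encoded as restricted growth strings (canonical sequential forms), and a descent (ascent) of size d or more is read as a position i with π_i - π_{i+1} ≥ d (π_{i+1} - π_i ≥ d); this reading of the statistic is an assumption.

```python
from collections import Counter

def rg_strings(n, k):
    """Canonical sequential forms (restricted growth strings) of the
    partitions of {1,...,n} into exactly k blocks."""
    def rec(prefix, mx):
        if len(prefix) == n:
            if mx == k:
                yield tuple(prefix)
            return
        for v in range(1, mx + 2):
            yield from rec(prefix + [v], max(mx, v))
    yield from rec([], 0)

def descents_ge(p, d):
    return sum(1 for a, b in zip(p, p[1:]) if a - b >= d)

def ascents_ge(p, d):
    return sum(1 for a, b in zip(p, p[1:]) if b - a >= d)

n, k, d = 7, 3, 2
desc = Counter(descents_ge(p, d) for p in rg_strings(n, k))
asc = Counter(ascents_ge(p, d) for p in rg_strings(n, k))
print(desc == asc)  # equidistribution expected for d >= 2 per the paper
```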
Franks, Peter J; Drake, Paul L; Beerling, David J
2009-01-01
.... However, using basic equations for gas diffusion through stomata of different sizes, we show that a negative correlation between S and D offers several advantages, including plasticity in gwmax...
A decomposition based on path sets for the Multi-Commodity k-splittable Maximum Flow Problem
Gamst, Mette
… Switching. In the literature, the problem is solved to optimality using branch-and-price algorithms built on path-based Dantzig-Wolfe decompositions. This paper proposes a new branch-and-price algorithm built on a path set-based Dantzig-Wolfe decomposition. A path set consists of up to k paths, each carrying a certain amount of flow. The new branch-and-price algorithm is implemented and compared to the leading algorithms in the literature. Results for the proposed method are competitive and the method even has the best performance on some instances. However, the results also indicate some scaling issues.
Held, Louis F.; Pritchard, Ernest I.
1946-01-01
An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane; four triptane blends and one xylidines blend with 28-R; and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be used more efficiently at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.
Reudler Talsma, J.H.; Elzinga, J.A.; Harvey, J.A.; Biere, A.
2007-01-01
Host size is considered a reliable indicator of host quality and an important determinant of parasitoid fitness. Koinobiont parasitoids attack hosts that continue feeding and growing during parasitism. In contrast with hemolymph-feeding koinobionts, tissue-feeding koinobionts face not only a minimum
The (Ir)relevance of Group Size in Health Care Priority Setting: A Reply to Juth.
Sandman, Lars; Gustavsson, Erik
2017-03-01
How to handle orphan drugs for rare diseases is a pressing problem in current health care. Because the small group size of the patients affects the cost of treatment, they risk being disadvantaged in relation to existing cost-effectiveness thresholds. Niklas Juth has argued that it is irrelevant to take indirectly operative factors like group size into account, since such a compensation would risk discounting the use of cost, a relevant factor, altogether. In this article we analyze Juth's argument and observe that we already compensate for indirectly operative factors, both outside and within cost-effectiveness evaluations, for formal equality reasons. Based on this we argue that we have reason to set cost-effectiveness thresholds so as to integrate equity concerns, including formal equality considerations. We find no reason not to compensate for group size to the extent we already compensate for other factors. Moreover, if group size implies a systematic disadvantage also on a global scale, i.e. when different aspects of the health condition of patients suffering from rare diseases are taken into account, that provides strong reason why group size is indeed relevant to compensate for (if anything).
Sandel, Brody Steven; Arge, Lars Allan; Svenning, J.-C.
Contemporary patterns of species distributions are influenced by both current and historical conditions. Historically unstable climates can lead to reductions in species richness, when species go extinct because they cannot track climate changes, when dispersal limitation causes species to fail to fully occupy suitable habitat, or when local diversification rates are depressed by local population extinctions and changing selective regimes. Locations with long-term climate instability should therefore show reduced species richness, with small-ranged species particularly missing from the community. We used a novel measure of climate stability, climate change velocity, which combines information on temporal and spatial gradients in climate to describe the rate at which a particular climate condition is moving over the surface of the Earth. Climate change velocity since the Last Glacial Maximum …
Vermeij, Geerat J.
2016-01-01
Large consumers have ecological influence disproportionate to their abundance, although this influence in food webs depends directly on productivity. Evolutionary patterns at geologic timescales inform expectations about the relationship between consumers and productivity, but it is very difficult to track productivity through time with direct, quantitative measures. Based on previous work that used the maximum body size of Cenozoic marine invertebrate assemblages as a proxy for benthic productivity, we investigated how the maximum body size of Cenozoic marine mammals, in two feeding guilds, evolved over comparable temporal and geographical scales. First, maximal size in marine herbivores remains mostly stable and occupied by two different groups (desmostylians and sirenians) over separate timeframes in the North Pacific Ocean, while sirenians exclusively dominated this ecological mode in the North Atlantic. Second, mysticete whales, which are the largest Cenozoic consumers in the filter-feeding guild, remained in the same size range until a Mio-Pliocene onset of cetacean gigantism. Both vertebrate guilds achieved very large size only recently, suggesting that different trophic mechanisms promoting gigantism in the oceans have operated in the Cenozoic than in previous eras. PMID:27381883
Selection of MOSFET Sizes by Fuzzy Sets Intersection in the Feasible Solutions Space
S. Polanco-Martagón
2012-06-01
Full Text Available A fuzzy sets intersection procedure to select the optimum sizes of analog circuits composed of metal-oxide-semiconductor field-effect transistors (MOSFETs) is presented. The cases of study are voltage followers (VFs) and a current-feedback operational amplifier (CFOA), where the width (W) and length (L) of the MOSFETs are selected from the space of feasible solutions computed by swarm or evolutionary algorithms. The evaluation of three objectives, namely gain, bandwidth and power consumption, is performed using HSPICE with standard integrated circuit (IC) technology of 0.35 μm for the VFs and 180 nm for the CFOA. Therefore, the intersection procedure among three fuzzy sets representing "gain close to unity", "high bandwidth" and "minimum power consumption" is presented. The main advantage relies on its usefulness to select feasible W/L sizes automatically but by considering deviation percentages from the desired target specifications. Basically, assigning a threshold to each fuzzy set does this. As a result, the proposed approach selects the best feasible size solutions to guarantee and to enhance the performances of the ICs in analog signal processing applications.
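A minimal sketch of the intersection step, with assumed membership functions and made-up candidate performances (the actual procedure evaluates HSPICE results for W/L candidates found by the search algorithms):

```python
import numpy as np

# Hypothetical feasible solutions: each row = (gain, bandwidth in Hz, power in W)
candidates = np.array([
    [0.98, 500e6, 1.2e-3],
    [0.95, 900e6, 2.0e-3],
    [0.99, 300e6, 0.8e-3],
])

def mu_gain(g, target=1.0, tol=0.05):
    # membership for "gain close to unity": 1 at the target, 0 beyond tol
    return np.clip(1 - np.abs(g - target) / tol, 0, 1)

def mu_bandwidth(bw, lo=200e6, hi=1e9):
    # membership for "high bandwidth": ramps up across the feasible range
    return np.clip((bw - lo) / (hi - lo), 0, 1)

def mu_power(p, lo=0.5e-3, hi=3e-3):
    # membership for "minimum power consumption": ramps down with power
    return np.clip((hi - p) / (hi - lo), 0, 1)

# fuzzy intersection = pointwise minimum of the three memberships
membership = np.minimum.reduce([mu_gain(candidates[:, 0]),
                                mu_bandwidth(candidates[:, 1]),
                                mu_power(candidates[:, 2])])
best = candidates[np.argmax(membership)]  # candidate maximizing the intersection
```

Thresholding `membership` instead of taking the argmax corresponds to the deviation-percentage thresholds mentioned in the abstract.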
Analytical Solution for the Size of the Minimum Dominating Set in Complex Networks
Nacher, Jose C
2016-01-01
Domination is the fastest-growing field within graph theory with a profound diversity and impact in real-world applications, such as the recent breakthrough approach that identifies optimized subsets of proteins enriched with cancer-related genes. Despite its conceptual simplicity, domination is a classical NP-complete decision problem which makes analytical solutions elusive and poses difficulties to design optimization algorithms for finding a dominating set of minimum cardinality in a large network. Here we derive for the first time an approximate analytical solution for the density of the minimum dominating set (MDS) by using a combination of cavity method and Ultra-Discretization (UD) procedure. The derived equation allows us to compute the size of MDS by only using as an input the information of the degree distribution of a given network.
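A greedy upper bound on the MDS density makes a handy point of comparison for such analytical estimates; a minimal networkx sketch (the paper's result is analytical, via the cavity method, not this heuristic):

```python
import networkx as nx

def greedy_mds(G):
    """Greedy upper bound for a minimum dominating set."""
    undominated = set(G)
    ds = set()
    while undominated:
        # pick the vertex whose closed neighborhood covers the most
        # still-undominated vertices
        v = max(G, key=lambda u: len((set(G[u]) | {u}) & undominated))
        ds.add(v)
        undominated -= set(G[v]) | {v}
    return ds

G = nx.barabasi_albert_graph(500, 3, seed=1)
print(len(greedy_mds(G)) / G.number_of_nodes())  # dominating-set density
```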
The impact of affect on willingness-to-pay and desired-set-size.
Hafenbrädl, Sebastian; Hoffrage, Ulrich; White, Chris M
2013-01-01
What role does affect play in economic decision making? Previous research showed that the number of items had a linear effect on the willingness-to-pay for those items when participants were computationally primed, whereas participants' willingness-to-pay was insensitive to the amount when they were affectively primed. We extend this research by also studying the impact of affect on nonmonetary costs of waiting for items to be displayed and of screening them in a computer task. We assessed these costs by asking participants how many items they desired to see before making their selection. In our experiment, the effect of priming on desired-set-size was even larger than on willingness-to-pay, which can be explained by the fact that the nonmonetary costs, waiting time, were real, whereas willingness-to-pay was hypothetical. Participants also reported their satisfaction with the choosing process and the chosen items; no linear or nonlinear relationship was found between the self-determined desired-set-size and satisfaction.
On the Maximum Area Hexagon in a Planar Point Set
杜亚涛; 张士军; 冯杏芳
2012-01-01
A finite set of points in the plane is said to be in convex position if it forms the set of vertices of a convex polygon. Let P denote a finite planar point set in convex position; this work studies the maximum value of the ratio between the area of a convex hexagon determined by a subset of P and the area of the convex hull CH(P).
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data, called "maximum fidelity", that addresses these traditionally distinct statistical notions is presented. Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Kim, Hye-Young; Shin, Yeonsoon; Han, Sanghoon
2014-04-01
It has been proposed that choice utility exhibits an inverted U-shape as a function of the number of options in the choice set. However, most researchers have so far only focused on the "physically extant" number of options in the set while disregarding the more important psychological factor, the "subjective" number of options worth considering to choose-that is, the size of the consideration set. To explore this previously ignored aspect, we examined how variations in the size of a consideration set can produce different affective consequences after making choices and investigated the underlying neural mechanism using fMRI. After rating their preferences for art posters, participants made a choice from a presented set and then reported on their level of satisfaction with their choice and the level of difficulty experienced in choosing it. Our behavioral results demonstrated that an enlarged assortment set can lead to greater choice satisfaction only when increases in both consideration set size and preference contrast are involved. Moreover, choice difficulty is determined based on the size of an individual's consideration set rather than on the size of the assortment set, and it decreases linearly as a function of the level of contrast among alternatives. The neuroimaging analysis of choice-making revealed that subjective consideration set size was encoded in the striatum, the dACC, and the insula. In addition, the striatum also represented variations in choice satisfaction resulting from alterations in the size of consideration sets, whereas a common neural specificity for choice difficulty and consideration set size was shown in the dACC. These results have theoretical and practical importance as this is one of the first studies investigating the influence of the psychological attributes of choice sets on the value-based decision-making process.
Potvin, Jean; Goldbogen, Jeremy A; Shadwick, Robert E
2012-01-01
Bulk-filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body-streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus) and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely, 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half of VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting individual prey …
Effect of set size, age, and mode of stimulus presentation on information-processing speed.
Norton, J. C.
1972-01-01
First, second, and third grade pupils served as subjects in an experiment designed to show the effect of age, mode of stimulus presentation, and information value on recognition time. Stimuli were presented in picture and printed word form and in groups of 2, 4, and 8. The results of the study indicate that first graders are slower than second and third graders who are nearly equal. There is a gross shift in reaction time as a function of mode of stimulus presentation with increase in age. The first graders take much longer to identify words than pictures, while the reverse is true of the older groups. With regard to set size, a slope appears in the pictures condition in the older groups, while for first graders, a large slope occurs in the words condition and only a much smaller one for pictures.
Seung, Youl Hun [Dept. of Radiological Science, Cheongju University, Cheongju (Korea, Republic of)
2015-12-15
In this study, we observed the change in Hounsfield units (HU) caused by changes in the size of the physical area and in the set size of the region of interest (ROI), with a focus on kVp and mAs. Four-channel multi-detector computed tomography was used to obtain transverse axial scan images and HU values. A three-dimensional printer of the fused deposition modeling (FDM) type was used to produce the phantom. The phantom was designed as a cylinder containing symmetrically located circular holes of 33 mm, 24 mm, 19 mm, 16 mm, and 9 mm diameter, which were filled with a mixture of iodine contrast agent and distilled water. Images were acquired at 90 kVp, 120 kVp, and 140 kVp and at 50 mAs, 100 mAs, and 150 mAs, respectively. ImageJ was used to measure the HU of the ROIs in the acquired images. As a result, kVp was confirmed to affect HU more than mAs. The results also suggest that the smaller the physical area, the lower the HU, even in a material of uniform density, and the smaller the set ROI size, the higher the HU. Therefore, setting the maximum ROI that keeps the variation within 5 HU appears to be the best way to minimize the alteration caused by changes in the size of the physical area and the set size of the ROI.
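A minimal sketch of the ROI measurement itself: a circular-mask mean over a 2D image in HU units, analogous to what ImageJ computes (function and array names are ours):

```python
import numpy as np

def roi_mean_hu(image_hu, center_rc, radius_px):
    """Mean value inside a circular ROI of a 2D image given in HU."""
    rows, cols = np.ogrid[:image_hu.shape[0], :image_hu.shape[1]]
    mask = (rows - center_rc[0]) ** 2 + (cols - center_rc[1]) ** 2 <= radius_px ** 2
    return float(image_hu[mask].mean())
```

Sweeping `radius_px` over the same hole probes the kind of ROI-size dependence the study reports.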
The critical size is set at a single-cell level by growth rate to attain homeostasis and adaptation.
Ferrezuelo, Francisco; Colomina, Neus; Palmisano, Alida; Garí, Eloi; Gallego, Carme; Csikász-Nagy, Attila; Aldea, Martí
2012-01-01
Budding yeast cells are assumed to trigger Start and enter the cell cycle only after they attain a critical size set by external conditions. However, arguing against deterministic models of cell size control, cell volume at Start displays great individual variability even under constant conditions. Here we show that cell size at Start is robustly set at a single-cell level by the volume growth rate in G1, which explains the observed variability. We find that this growth-rate-dependent sizer is intimately hardwired into the Start network and the Ydj1 chaperone is key for setting cell size as a function of the individual growth rate. Mathematical modelling and experimental data indicate that a growth-rate-dependent sizer is sufficient to ensure size homeostasis and, as a remarkable advantage over a rigid sizer mechanism, it reduces noise in G1 length and provides an immediate solution for size adaptation to external conditions at a population level.
Water distribution from medium-size sprinkler in solid set sprinkler systems
Giuliani do Prado
2016-03-01
Full Text Available The study aimed to evaluate the water distribution from a medium-size sprinkler working in solid set sprinkler systems. Water distribution radial curves for the sprinkler operating under four nozzle diameter combinations (4.0 x 4.6; 5.0 x 4.6; 6.2 x 4.6; and 7.1 x 4.6 mm) and four working pressures (196, 245, 294 and 343 kPa) were evaluated on the sprinkler test bench of the State University of Maringá, in Cidade Gaúcha, Paraná, Brazil. The sixteen water distribution curves were normalized and subjected to clustering analysis (K-Means algorithm), identifying normalized distribution curves with three different geometric shapes. A computer algorithm, in Visual Basic for Applications in an Excel spreadsheet, was developed to simulate the water application uniformity (Christiansen's coefficient, CU) for sprinklers working in rectangular and triangular layouts in solid set sprinkler systems. For the three geometric shapes of the normalized water distribution curves, digital simulation of water distribution uniformity for sprinkler spacings on the main line and lateral line between 10 and 100% of the wetted diameter indicated that spacings around 50% of the wetted diameter provide acceptable CU values.
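Christiansen's coefficient used in the simulations is a standard uniformity measure; a minimal implementation from catch-can (or simulated overlap) depths:

```python
import numpy as np

def christiansen_cu(depths):
    """Christiansen's uniformity coefficient, CU (%):
    CU = 100 * (1 - sum(|x_i - mean|) / (n * mean))."""
    d = np.asarray(depths, dtype=float)
    return 100.0 * (1.0 - np.abs(d - d.mean()).sum() / (d.size * d.mean()))

print(christiansen_cu([4.2, 4.8, 5.1, 4.9, 5.0]))  # 95.0 for these depths
```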
Dang, Cuong Cao; Le, Vinh Sy; Gascuel, Olivier; Hazes, Bart; Le, Quang Si
2014-10-24
Amino acid replacement rate matrices are a crucial component of many protein analysis systems such as sequence similarity search, sequence alignment, and phylogenetic inference. Ideally, the rate matrix reflects the mutational behavior of the actual data under study; however, estimating amino acid replacement rate matrices requires large protein alignments and is computationally expensive and complex. As a compromise, sub-optimal pre-calculated generic matrices are typically used for protein-based phylogeny. Sequence availability has now grown to a point where problem-specific rate matrices can often be calculated if the computational cost can be controlled. The most time-consuming step in estimating rate matrices by maximum likelihood is building maximum likelihood phylogenetic trees from protein alignments. We propose a new procedure, called FastMG, to overcome this obstacle. The key innovation is the alignment-splitting algorithm that splits alignments with many sequences into non-overlapping sub-alignments prior to estimating amino acid replacement rates. Experiments with different large data sets showed that the FastMG procedure was an order of magnitude faster than without splitting. Importantly, there was no apparent loss in matrix quality when an appropriate splitting procedure was used. FastMG is a simple, fast and accurate procedure to estimate amino acid replacement rate matrices from large data sets. It enables researchers to study the evolutionary relationships for specific groups of proteins or taxa with optimized, data-specific amino acid replacement rate matrices. The programs, data sets, and the new mammalian mitochondrial protein rate matrix are available at http://fastmg.codeplex.com.
Kye, Heewon; Sohn, Bong-Soo; Lee, Jeongjin
2012-07-01
Maximum intensity projection (MIP) is an important visualization method that has been widely used for the diagnosis of enhanced vessels or bones by rotating or zooming MIP images. With the rapid spread of multidetector-row computed tomography (MDCT) scanners, MDCT scans of a patient generate a large data set. However, previous acceleration methods for MIP rendering of such data sets failed to generate MIP images at interactive rates. In this paper, we propose novel culling methods in both object and image space for interactive MIP rendering of large medical data sets. In object space, for the visibility test of a block, we propose an initial occluder derived from the preceding image to exploit temporal coherence and substantially increase the block culling ratio. In addition, we propose a hole-filling method using mesh generation and rendering to improve the culling performance during the generation of the initial occluder. In image space, we find that there is a trade-off between the block culling ratio in object space and the culling efficiency in image space. We classify the visible blocks into two types by their visibility and propose a balanced culling method that applies a different culling algorithm in image space for each type to exploit the trade-off and improve the rendering speed. Experimental results on twenty CT data sets showed that our method achieved an average speedup of 3.85 times, without any loss of image quality, compared with the conventional bricking method. Using our visibility culling method, we achieved interactive GPU-based MIP rendering of large medical data sets.
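The operation being accelerated is simple to state; a brute-force MIP over a volume in numpy (the paper's contribution is the culling that avoids touching most of these voxels):

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(size=(128, 256, 256)).astype(np.float32)  # synthetic CT stack

# For each ray perpendicular to the slice stack, keep only the brightest
# voxel along the ray: the definition of a maximum intensity projection.
mip_axial = volume.max(axis=0)
```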
Carli, Lorenzo; Genta, Gianfranco; Cantatore, Angela
2010-01-01
The work deals with an experimental investigation on the influence of three Scanning Electron Microscope (SEM) instrument settings, accelerating voltage, spot size and magnification, on the image formation process. Pixel size and nonlinearity were chosen as output parameters related to image qual...
Boyaval, S.
2000-06-15
This PhD thesis presents a study of a series of high pressure swirl atomizers dedicated to Gasoline Direct Injection (GDI). Measurements are performed in stationary and pulsed working conditions. A major aspect of this thesis is the development of an original experimental set-up to correct the multiple light scattering that biases drop size distribution measurements obtained with a laser diffraction technique (Malvern 2600D). This technique allows a study of drop size characteristics near the injector tip. Correction factors on drop size characteristics and on the diffracted intensities are defined from the developed procedure. Another point consists in applying the Maximum Entropy Formalism (MEF) to calculate drop size distributions. Comparisons between experimental distributions corrected with the correction factors and the calculated distributions show good agreement. This work points out that the mean diameter D43, which is also the mean of the volume drop size distribution, and the relative volume span factor δv are important characteristics of volume drop size distributions. The end of the thesis proposes to determine local drop size characteristics from a new development of a deconvolution technique for line-of-sight scattering measurements. The first results show reliable behaviour of the radial evolution of local characteristics. In the GDI application, we notice that the critical point is the opening stage of the injection. This study clearly shows the effects of injection pressure and nozzle internal geometry on the working characteristics of these injectors, in particular the influence of the pre-spray. This work points out important behaviours that improvements of the GDI principle ought to consider.
Thorbek, P; Hyder, K
2006-08-01
Residues on foodstuffs resulting from the use of crop-protection products are a function of many factors, e.g. environmental conditions, dissipation and application rate, some of which are linked to the physicochemical properties of the active ingredients. Residue limits (maximum residue levels (MRLs) and tolerances) of fungicides, herbicides and insecticides set by different regulatory authorities are compared, and the relationship between physicochemical properties of the active ingredients and residue limits are explored. This was carried out using simple summary statistics and artificial neural networks. US tolerances tended to be higher than European Union MRLs. Generally, fungicides had the highest residue limits followed by insecticides and herbicides. Physicochemical properties (e.g. aromatic proportion, non-carbon proportion and water solubility) and crop type explained up to 50% of the variation in residue limits. This suggests that physicochemical properties of the active ingredients may control important aspects of the processes leading to residues.
Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher
2008-09-15
We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.
Maximum Wind Energy Capture Algorithm Based on Adaptive Variable Step Size
李树江; 蔡海锋; 邓金鹏; 孔丽新
2012-01-01
Wind power has the characteristics of randomness and instability. To improve the utilization efficiency of wind energy in wind power generation systems, the shortcomings of the hill-climb search (HCS) method and the tip-speed-ratio (TSR) method are analysed on the basis of a comparison of various maximum wind energy capture algorithms, and an adaptive variable step size search algorithm is proposed to capture the maximum wind power. By improving the variable step size strategy of HCS, the search speed is significantly accelerated, and by introducing an initial estimate of the tip speed ratio, the search range is greatly narrowed. The algorithm requires no accurate real-time wind speed measurement and does not rely on the optimal power curve of the wind turbine, effectively reducing cost and improving the efficiency of wind power generation. The paper focuses on the adaptivity of the algorithm and its variable step size strategy; simulation results show that the algorithm enables the wind turbine to reach the maximum power point more quickly, with fast dynamic response and good convergence.
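A minimal sketch of an adaptive-step hill-climb search of the kind described, on a made-up single-peak power curve (the actual algorithm also bounds the search with the initial tip-speed-ratio estimate):

```python
def hcs_mppt(power, w0=1.0, step0=0.5, k=0.05, n_iter=200):
    """Hill-climb search whose step adapts to the estimated power gradient."""
    w, step, direction = w0, step0, 1.0
    p_prev = power(w)
    for _ in range(n_iter):
        w_new = w + direction * step
        p = power(w_new)
        dp = p - p_prev
        if dp < 0:
            direction = -direction          # passed the peak: reverse
        grad = abs(dp) / step               # finite-difference slope estimate
        step = max(min(step0, k * grad), 1e-3)  # step shrinks as the peak nears
        w, p_prev = w_new, p
    return w

# toy single-peak power curve with its maximum at w = 7
print(hcs_mppt(lambda w: 50.0 - (w - 7.0) ** 2))  # converges near 7
```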
Child t-shirt size data set from 3D body scanner anthropometric measurements and a questionnaire.
Pierola, A; Epifanio, I; Alemany, S
2017-04-01
A dataset from a fit assessment study in children is presented. Anthropometric measurements of 113 children were obtained using a 3D body scanner. Children tried on a t-shirt in different sizes (with a different model for boys and girls), and the fit was assessed by an expert. This expert labeled the fit as 0 (correct), -1 (if the garment was small for that child), or 1 (if the garment was large for that child) in an ordered factor called Size-fit. Moreover, the fit was numerically assessed from 1 (very poor fit) to 10 (perfect fit) in a variable called Expert evaluation. This data set contains the differences between the reference mannequin of the evaluated size and the child's anthropometric measurements for 27 variables. Besides these variables, the data set also includes the gender, the size evaluated, and the size recommended by the expert, including whether an intermediate, but nonexistent, size between two consecutive sizes would have been the right size. In total, there are 232 observations. The analysis of these data can be found in Pierola et al. (2016) [2].
Anonymous
2003-01-01
Terrigenous components were separated from the bulk sediment of Core A7 from the Okinawa Trough and Core A37 from the Ryukyu Trench, and grain-size distributions of these sub-samples were analyzed. Based upon an analysis of the grain-size data of the two sedimentary sequences, grain-size populations are identified to be sensitive to sedimentary environmental changes. The modal values and size ranges of the two main grain-size populations in Core A7 are evidently different from those of Core A37, indicating the spatial variability of sediment sources and transport processes between the two places. The downcore variations in the content of the environmentally sensitive grain-size populations reveal that during the accumulation of sedimentary material the environment remained relatively stable at the site where Core A7 was collected, except for the apparent events of the formation of two turbidite layers and a volcanic ash layer. However, the sedimentary sequence of Core A37 shows six sedimentary cycles, indicating a highly variable sedimentary environment at this location.
Ma, Hong-Hao [Chongqing Univ., Chongqing (People's Republic of China)]; Wu, Xing-Gang [Chongqing Univ., Chongqing (People's Republic of China)]; Ma, Yang [Chongqing Univ., Chongqing (People's Republic of China)]; Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States)]; Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)]
2015-05-26
A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R …
Effect of glucosamine supplementation on litter size in a commercial setting, NPB project #14-238
Litter size is influenced by ovulation rate, fertilization rate, embryo mortality and uterine capacity. Of these, the most limiting factor is uterine capacity, because increased ovulation rate results in increased number of embryos on day 30 of gestation, but this advantage is lost during later gest...
Shapes and Sizes of Voids in the LCDM Universe: Excursion Set Approach
Shandarin, Sergei; Feldman, Hume A.; Heitmann, Katrin; Habib, S.
2006-01-01
We study the global distribution and morphology of dark matter voids in a LCDM universe using density fields generated by N-body simulations. Voids are defined as isolated regions of the low-density excursion set specified via density thresholds, the density thresholds being quantified by the corresponding filling factors, i.e., the fraction of the total volume in the excursion set. Our work encompasses a systematic investigation of the void volume function, the volume fraction in voids, and the fitting of voids to corresponding ellipsoids and spheres. We emphasize the relevance of the percolation threshold to the void volume statistics of the density field both in the high redshift, Gaussian random field regime, as well as in the present epoch. By using measures such as the Inverse Porosity, we characterize the quality of ellipsoidal fits to voids, finding that such fits are a poor representation of the larger voids that dominate the volume of the void excursion set.
Personal Control and the Ecology of Community Living Settings: Beyond Living-Unit Size and Type.
Stancliffe, Roger J.; Abery, Brian H.; Smith, John
2000-01-01
Personal control exercised by 74 adults from community living settings in Minnesota was evaluated. Individuals living semi-independently exercised more personal control than did residents of HCBS Waiver-funded supported living services, who had more personal control than did those living in community ICFs/MR. Personal characteristics and…
Beauty, body size and wages: Evidence from a unique data set.
Oreffice, Sonia; Quintana-Domeque, Climent
2016-09-01
We analyze how attractiveness rated at the start of the interview in the German General Social Survey is related to weight, height, and body mass index (BMI), separately by gender and accounting for interviewers' characteristics or fixed effects. We show that height, weight, and BMI all strongly contribute to male and female attractiveness when attractiveness is rated by opposite-sex interviewers, and that anthropometric characteristics are irrelevant to male interviewers when assessing male attractiveness. We also estimate whether, controlling for beauty, body size measures are related to hourly wages. We find that anthropometric attributes play a significant role in wage regressions in addition to attractiveness, showing that body size cannot be dismissed as a simple component of beauty. Our findings are robust to controlling for health status and accounting for selection into working.
Hetley, Richard; Dosher, Barbara Anne; Lu, Zhong-Lin
2014-01-01
Attention precues improve the performance of perceptual tasks in many but not all circumstances. These spatial attention effects may depend upon display set size or workload, and have been variously attributed to external noise filtering, stimulus enhancement, contrast gain, or response gain, or to uncertainty or other decision effects. In this study, we document systematically different effects of spatial attention in low- and high-precision judgments, with and without external noise, and in different set sizes in order to contribute to the development of a taxonomy of spatial attention. An elaborated perceptual template model (ePTM) provides an integrated account of a complex set of effects of spatial attention with just two attention factors: a set-size dependent exclusion or filtering of external noise and a narrowing of the perceptual template to focus on the signal stimulus. These results are related to the previous literature by classifying the judgment precision and presence of external noise masks in those experiments, suggesting a taxonomy of spatially cued attention in discrimination accuracy. PMID:24939234
Multivariate asymptotic analysis of set partitions: Focus on blocks of fixed size
Guy Louchard
2017-01-01
Full Text Available Using the saddle point method and multiseries expansions, we obtain from the exponential formula and Cauchy's integral formula asymptotic results for the number $T(n,m,k)$ of partitions of $n$ labeled objects with $m$ blocks of fixed size $k$. We analyze the central and non-central region. In the region $m = n/k - n^\alpha$, $1 > \alpha > 1/2$, we analyze the dependence of $T(n,m,k)$ on $\alpha$. This paper fits within the framework of Analytic Combinatorics.
Computing a Finite Size Representation of the Set of Approximate Solutions of an MOP
Schuetze, Oliver; Tantar, Emilia; Talbi, El-Ghazali
2008-01-01
Recently, a framework for the approximation of the entire set of $\epsilon$-efficient solutions (denoted by $E_\epsilon$) of a multi-objective optimization problem with stochastic search algorithms has been proposed. It was proven that such an algorithm produces -- under mild assumptions on the process to generate new candidate solutions -- a sequence of archives which converges to $E_\epsilon$ in the limit and in the probabilistic sense. The result, though satisfactory for most discrete MOPs, is at least from the practical viewpoint not sufficient for continuous models: in this case, the set of approximate solutions typically forms an $n$-dimensional object, where $n$ denotes the dimension of the parameter space, and thus performance problems may arise since in practice one has to cope with a finite archive. Here we focus on obtaining finite and tight approximations of $E_\epsilon$, the latter measured by the Hausdorff distance. We propose and investigate a novel archiving strategy theoretically and emp...
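The Hausdorff distance used to measure tightness has a direct implementation for finite point sets; a minimal sketch (scipy also provides scipy.spatial.distance.directed_hausdorff):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets (one point per row)."""
    D = cdist(A, B)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.1], [1.0, -0.1], [0.5, 0.0]])
print(hausdorff(A, B))  # 0.5: driven by the midpoint of B, far from all of A
```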
A Full-size High Temperature Superconducting Coil Employed in a Wind Turbine Generator Set-up
Song, Xiaowei (Andy); Mijatovic, Nenad; Kellers, Jürgen
2016-01-01
A full-size stationary experimental set-up, which is a pole pair segment of a 2 MW high temperature superconducting (HTS) wind turbine generator, has been built and tested under the HTS-GEN project in Denmark. The performance of the HTS coil is crucial to the set-up, and further to the development … The coil is tested in LN2 first, and then tested in the set-up so that the magnetic environment in a real generator is reflected. The experimental results are reported, followed by a finite element simulation and a discussion on the deviation of the results. The tested and estimated Ic in LN2 are 148 A and 143 A …
[Nutritional support groups at a hospital setting. Size, composition, relationships and actions].
Santana Porbén, S; Barreto Penié, J
2007-01-01
The hospital Nutritional Support Group (NSG) represents the ultimate step in the evolution of the forms of provision of nutritional and feeding care to hospitalized patients. The NSG outdoes other preceding forms through the harmony and cohesion among its members, its multi-, inter- and transdisciplinarity, its dedication to the activity on a full-time basis, and its capability to self-finance by means of the savings derived from the implementation of a nutritional policy consistent with the Good Practices of Feeding and Nutrition. It is to be expected that the inception and operation of an NSG in a hospital environment allows the realization of the benefits embedded in Metabolic, Nutritional and Feeding Intervention Programs. Guidelines and recommendations for the definition of the size and composition of a hospital NSG are presented in this article, along with the responsibilities, functions and tasks to be assumed by its members, and a timetable for its implementation, drawing on the experience of the authors in conducting an NSG in a tertiary-care hospital in Havana (Cuba).
On sets of vectors of a finite vector space in which every subset of basis size is a basis II
Ball, Simeon
2012-01-01
This article contains a proof of the MDS conjecture for $k \leq 2p-2$. That is, if $S$ is a set of vectors of ${\mathbb F}_q^k$ in which every subset of $S$ of size $k$ is a basis, where $q=p^h$, $p$ is prime, $q$ is not prime, and $k \leq 2p-2$, then $|S| \leq q+1$. It also contains a short proof of the same fact for $k \leq p$, for all $q$.
Deary Ian J
2009-04-01
Full Text Available Background: Brain size is associated with cognitive ability in adulthood (correlation ~ .3), but few studies have investigated the relationship in normal ageing, particularly beyond age 75 years. With age, both brain size and fluid-type intelligence decline, and regional atrophy is often suggested as causing decline in specific cognitive abilities. However, an association between brain size and intelligence may be due to the persistence of this relationship from earlier life. Methods: We recruited 107 community-dwelling volunteers (29% male) aged 75–81 years for cognitive testing and neuroimaging. We used principal components analysis to derive a 'general cognitive factor' (g) from tests of fluid-type ability. Using semi-automated analysis, we measured whole brain volume (WBV), intracranial area (ICA; an estimate of maximal brain volume), and volumes of the frontal and temporal lobes, amygdalo-hippocampal complex, and ventricles. Brain atrophy was estimated by correcting WBV for ICA. Results: WBV correlated with general cognitive ability (g) (r = .21, P …). Conclusion: The association between brain regions and specific cognitive abilities in community-dwelling people of older age is due to the life-long association between whole brain size and general cognitive ability, rather than atrophy of specific regions. Researchers and clinicians should therefore be cautious of interpreting global or regional brain atrophy on neuroimaging as contributing to cognitive status in older age without taking into account prior mental ability and brain size.
Rodríguez-Pérez, Raquel; Vogt, Martin; Bajorath, Jürgen
2017-04-24
Support vector machine (SVM) modeling is one of the most popular machine learning approaches in chemoinformatics and drug design. The influence of training set composition and size on predictions is currently an underinvestigated issue in SVM modeling. In this study, we have derived SVM classification and ranking models for a variety of compound activity classes under systematic variation of the number of positive and negative training examples. With increasing numbers of negative training compounds, SVM classification calculations became increasingly accurate and stable. However, this was only the case if a required threshold of positive training examples was also reached. In addition, consideration of class weights and optimization of cost factors substantially aided in balancing the calculations for increasing numbers of negative training examples. Taken together, the results of our analysis have practical implications for SVM learning and the prediction of active compounds. For all compound classes under study, top recall performance, independent of training set composition, was achieved when 250-500 active and 500-1000 randomly selected inactive training instances were used. However, as long as ∼50 known active compounds were available for training, increasing numbers of 500-1000 randomly selected negative training examples significantly improved model performance and gave very similar results for different training sets.
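The composition experiment is easy to mimic on synthetic data with scikit-learn; a hedged sketch (dataset, kernel, and counts are illustrative, not the paper's protocol):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pos = np.where(y_tr == 1)[0][:250]            # fixed positive training set
for n_neg in (250, 500, 1000):
    neg = np.where(y_tr == 0)[0][:n_neg]      # growing negative training set
    idx = np.concatenate([pos, neg])
    clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr[idx], y_tr[idx])
    print(n_neg, recall_score(y_te, clf.predict(X_te)))
```

Class weighting here plays the balancing role the abstract attributes to class weights and cost-factor optimization.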
Ehemann, N R; González-González, L V; Trites, A W
2017-03-01
Three rays opportunistically obtained near Margarita Island, Venezuela, were identified as lesser devil rays Mobula cf. hypostoma, but their disc widths were between 207 and 230 cm, which is almost double the reported maximum disc width of 120 cm for this species. These morphometric data suggest that lesser devil rays are either larger than previously recognized or that these specimens belong to an unknown sub-species of Mobula in the Caribbean Sea. Better data are needed to describe the distribution, phenotypic variation and population structure of this poorly known species.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L that all trees in T, when restricted to X, are consistent with.
Lek, E; Fairclough, D V; Hall, N G; Hesp, S A; Potter, I C
2012-11-01
The size and age data and patterns of growth of three abundant, reef-dwelling and protogynous labrid species (Coris auricularis, Notolabrus parilus and Ophthalmolepis lineolata) in waters off Perth at c. 32° S and in the warmer waters of the Jurien Bay Marine Park (JBMP) at c. 30° S on the lower west coast of Australia are compared. Using data for the top 10% of values and a randomization procedure, the maximum total length (L(T)) and mass of each species and the maximum age of the first two species were estimated to be significantly greater off Perth than in the JBMP (all P < 0.05). These latitudinal trends thus typically conform to those frequently exhibited by fish species and the predictions of the metabolic theory of ecology (MTE). While, in terms of mass, the instantaneous growth rates of each species were similar at both latitudes during early life, they were greater at the higher latitude throughout the remainder and thus much of life, which is broadly consistent with the MTE. When expressed in terms of L(T), however, instantaneous growth rates did not exhibit consistent latitudinal trends across all three species. The above trends with mass, together with those for reproductive variables, demonstrate that a greater amount of energy is directed into somatic growth and gonadal development by each of these species at the higher latitude. The consistency of the direction of the latitudinal trends for maximum body size and age and pattern of growth across all three species implies that each species is responding in a similar manner to differences between the environmental characteristics, such as temperature, at those two latitudes. The individual maximum L(T), mass and age and pattern of growth of O. lineolata at a higher and thus cooler latitude on the eastern Australian coast are consistent with the latitudinal trends exhibited by those characteristics for this species in the two western Australian localities. The implications of using mass rather than
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep learning refers to a large set of neural network based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which are different from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging related tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub region of the original image that contains the organ of interest. By layering several such stacks together a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate
Maximum Pre-Angiogenic Tumor Size
Erickson, Amy H. Lin
2010-01-01
This material has been used twice as an out-of-class project in a mathematical modeling class, the first elective course for mathematics majors. The only prerequisites for this course were differential and integral calculus, but all students had been exposed to differential equations, and the project was assigned during discussions about solving…
S. Shahid Shaukat; Toqeer Ahmed Rao; Moazzam A. Khan
2016-01-01
...) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix pertaining to water quality variables (p = 22...
Rubin, Stephen P.; Reisenbichler, Reginald R.; Slatton, Stacey L.; Wetzel, Lisa A.; Hayes, Michael C.
2012-01-01
The accuracy of a model that predicts time between fertilization and maximum alevin wet weight (MAWW) from incubation temperature was tested for steelhead Oncorhynchus mykiss from Dworshak National Fish Hatchery on the Clearwater River, Idaho. MAWW corresponds to the button-up fry stage of development. Embryos were incubated at warm (mean = 11.6°C) or cold (mean = 7.3°C) temperatures and time between fertilization and MAWW was measured for each temperature. Model predictions of time to MAWW were within 1% of measured time to MAWW. Mean egg weight ranged from 0.101 to 0.136 g among females (mean = 0.116 g). Time to MAWW was positively related to egg size for each temperature, but the increase in time to MAWW with increasing egg size was greater for embryos reared at the warm than at the cold temperature. We developed equations accounting for the effect of egg size on time to MAWW for each temperature, and also for the mean of those temperatures (9.3°C).
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity, Shephard's lemma, as well as duality theory in production and linear programming.
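In symbols (notation assumed here, not quoted from the paper), with $e$ the maximum functional on $C(K)$:

\[
e(f) = \max_{x\in K} f(x), \qquad
\partial e(f) = \bigl\{\, \mu \in \mathcal{P}(K) : \operatorname{supp}\mu \subseteq \operatorname*{arg\,max}_{x\in K} f \,\bigr\}.
\]

For instance, on $K=[0,1]$ with $f(x)=x(1-x)$ the arg max is the single point $1/2$, so $\partial e(f)=\{\delta_{1/2}\}$ and $e$ is differentiable at $f$; at a constant function the arg max is all of $K$ and $\partial e(f)$ is the whole of $\mathcal{P}(K)$.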
Sørensen, Arne Hagsten; Sonnergaard, Jørn Møller; Hovgaard, Lars
2006-01-01
The aim of the present study was to investigate the effect of punch and die diameter, sample size, compression speed, and particle size on two low-pressure compression-derived parameters: the compressed density and the Walker w parameter. The excellent repeatability of the low-pressure compression method allowed small effects of variations in punch and die diameter and sample size to be demonstrated at a high significance level. Changing the compression speed, however, did not cause a significant effect on the compressed density, whereas a decrease in w was seen. The effect of particle size was studied by compressing and tapping different grades of calcium carbonate, lactose, and microcrystalline cellulose. The low-pressure compression-derived parameters were compared to tapped densities and to Compressibility Indexes obtained by tapping volumetry. Even though the relationship between particle size and the low-pressure compression-derived parameters appeared to be more complicated, a similar trend was observed. It was concluded that the low-pressure compression method provides a useful alternative to the more sample-consuming methods providing flow-related information.
Algina, James; Keselman, H. J.
2008-01-01
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)
Jacquemyn, Hans; Brys, Rein; Honnay, Olivier
2009-08-23
Global circulation models predict increased climatic variability, which could increase variability in demographic rates and affect long-term population viability. In animal-pollinated species, pollination services, and thus fruit and seed set, may be highly variable among years and sites, and depend on both local environmental conditions and climatic variables. Orchid species may be particularly vulnerable to disruption of their pollination services, as most species depend on pollinators for successful fruit set and because seed germination and seedling recruitment are to some extent dependent on the amount of fruits and seeds produced. Better insights into the factors determining fruit and seed set are therefore indispensable for a better understanding of population dynamics and viability of orchid populations under changing climatic conditions. However, very few studies have investigated spatio-temporal variation in fruit set in orchids. Here, we quantified fruit production in eight populations of the orchid Orchis purpurea that does not reward pollinators and 13 populations of the rewarding Neottia (Listera) ovata during five consecutive years (2002-2006). Fruit production in large populations showed much higher stability than that in small populations and was less affected by extreme weather conditions. Our results highlight the potential vulnerability of small orchid populations to an increasingly variable climate through highly unpredictable fruit-set patterns.
Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.
Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira
2016-01-01
Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated the sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, a global sampling model with sampling noise, and a limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
Burns, Matthew K.; Zaslofsky, Anne F.; Maki, Kathrin E.; Kwong, Elena
2016-01-01
Incremental rehearsal (IR) has consistently led to effective retention of newly learned material, including math facts. The number of new items taught during one intervention session, called the intervention set, could be used to individualize the intervention. The appropriate amount of information that a student can rehearse and later recall…
Large sets of genomic data are becoming available for cucumber (Cucumis sativus), yet there is no tool for whole genome genotyping. Creation of saturated genetic maps depends on development of good markers. The present cucumber genetic maps are based on several hundred markers. However, they are ...
Aoki, Masahiko; Sato, Mariko; Hirose, Katsumi; Akimoto, Hiroyoshi; Kawaguchi, Hideo; Hatayama, Yoshiomi; Ono, Shuichi; Takai, Yoshihiro
2015-04-22
Radiation-induced rib fracture after stereotactic body radiotherapy (SBRT) for lung cancer has recently been reported. However, the incidence of radiation-induced rib fracture after SBRT using moderate fraction sizes has not been clarified with long-term follow-up. We examined the incidence and risk factors of radiation-induced rib fracture after SBRT using moderate fraction sizes for patients with peripherally located lung tumor. During 2003-2008, 41 patients with 42 lung tumors were treated with SBRT to 54-56 Gy in 9-7 fractions. The endpoint in the study was radiation-induced rib fracture detected by CT scan after the treatment. All ribs where the irradiated doses were more than 80% of the prescribed dose were selected and contoured to build the dose-volume histograms (DVHs). Comparisons of the several factors obtained from the DVHs and the probabilities of rib fracture calculated by the Kaplan-Meier method were performed in the study. Median follow-up time was 68 months. Among 75 contoured ribs, 23 rib fractures were observed in 34% of the patients during 16-48 months after SBRT; however, no patients complained of chest wall pain. The 4-year probabilities of rib fracture for maximum dose of ribs (Dmax) more than and less than 54 Gy were 47.7% and 12.9% (p = 0.0184), and for fraction sizes of 6, 7 and 8 Gy were 19.5%, 31.2% and 55.7% (p = 0.0458), respectively. Other factors, such as D2cc, mean dose of ribs, V10-55, age, sex, and planning target volume were not significantly different. The doses and fractionations used in this study resulted in no clinically significant rib fractures for this population, but higher Dmax and dose-per-fraction treatments resulted in an increase in asymptomatic grade 1 rib fractures.
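A minimal sketch of the Kaplan-Meier estimate used above, with invented per-rib data; the study's actual stratified comparison and p-values cannot be reproduced from the abstract.

import numpy as np

def kaplan_meier(time, event):
    # time: follow-up per rib (months); event: 1 = fracture seen, 0 = censored.
    # Returns event times and the survival estimate S(t) after each of them;
    # the cumulative fracture probability is 1 - S(t).
    time, event = np.asarray(time, float), np.asarray(event, int)
    times, surv, s = [], [], 1.0
    for t in np.unique(time):
        d = int(event[time == t].sum())   # fractures at time t
        n = int((time >= t).sum())        # ribs still at risk just before t
        if d > 0:
            s *= 1.0 - d / n
            times.append(t)
            surv.append(s)
    return np.array(times), np.array(surv)

# Hypothetical per-rib data stratified by Dmax (months, fracture indicator).
strata = {
    "Dmax > 54 Gy":  ([16, 20, 30, 48, 60, 68], [1, 1, 1, 0, 0, 0]),
    "Dmax <= 54 Gy": ([24, 40, 55, 60, 68, 68], [1, 0, 0, 0, 0, 0]),
}
for label, (t, e) in strata.items():
    _, s = kaplan_meier(t, e)
    print(f"{label}: estimated fracture probability = {1 - s[-1]:.2f}")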
Hughes, William O.; McNelis, Anne M.
2010-01-01
The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.
Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos
2017-06-23
This study develops an algorithm that presents a step-by-step method for selecting the location and size of a waste-to-energy facility targeting maximum energy output, while also considering the basic obstacle, which is in many cases the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, along with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research is focused both on the parameters that could increase the energy production and on those that affect the R1 energy efficiency factor. Estimation of the final gate fee is achieved through an economic analysis of the entire project by investigating both the expenses and the revenues that are expected according to the selected site and the outputs of the facility. At this point, a number of common revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece: Athens, Thessaloniki, and Central Greece, where the cities of Larisa and Volos have been selected for the application of the proposed decision-making tool. These case studies were selected based on a previous publication by two of the authors, in which these areas were examined. Results reveal that the development of a "solid" methodological approach to selecting the site and size of a waste-to-energy (WtE) facility is feasible. However, the maximization of the energy efficiency factor R1 requires high utilization factors, while the minimization of the final gate fee requires high R1 and high metals recovery from the bottom ash, as well as economic exploitation of recovered raw materials, if any.
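The R1 factor referred to above is the energy-efficiency criterion of the EU Waste Framework Directive; a minimal sketch of its commonly cited form follows (the weighting factors 2.6, 1.1 and 0.97 come from the Directive, not from this paper, and all inputs below are hypothetical).

def r1_factor(elec_gj, heat_gj, ef_gj, ei_gj, waste_t, ncv_gj_per_t):
    # EU Waste Framework Directive R1 energy-efficiency factor (sketch).
    # elec_gj, heat_gj : annual electricity / heat delivered (GJ)
    # ef_gj            : annual energy input from fuels producing steam (GJ)
    # ei_gj            : annual imported energy excluding Ew and Ef (GJ)
    # waste_t          : annual waste throughput (tonnes)
    # ncv_gj_per_t     : net calorific value of the waste (GJ/tonne)
    ep = 2.6 * elec_gj + 1.1 * heat_gj   # Directive weighting factors
    ew = waste_t * ncv_gj_per_t          # energy contained in the waste
    return (ep - (ef_gj + ei_gj)) / (0.97 * (ew + ef_gj))

# A hypothetical 100,000 t/yr plant with NCV 10 GJ/t exporting 180,000 GJ
# of electricity; R1 >= 0.65 is the Directive's recovery threshold.
print(round(r1_factor(180_000, 0.0, 5_000, 2_000, 100_000, 10.0), 3))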
Jou, Jerwen
2014-10-01
Subjects performed Sternberg-type memory recognition tasks (Sternberg paradigm) in four experiments. Category-instance names were used as learning and testing materials. Sternberg's original experiments demonstrated a linear relation between reaction time (RT) and memory-set size (MSS). A few later studies found no relation, and other studies found a nonlinear (logarithmic) relation between the two variables. These deviations were used as evidence undermining Sternberg's serial scan theory. This study identified two confounding variables in the fixed-set procedure of the paradigm (where multiple probes are presented at test for a learned memory set) that could generate an MSS RT function that was either flat or logarithmic rather than linearly increasing. These two confounding variables were task-switching cost and repetition priming. The former factor worked against smaller memory sets and in favour of larger sets, whereas the latter factor worked in the opposite way. Results demonstrated that a null or a logarithmic RT-to-MSS relation could be the artefact of the combined effects of these two variables. The Sternberg paradigm has been used widely in memory research, and a thorough understanding of the subtle methodological pitfalls is crucial. It is suggested that a varied-set procedure (where only one probe is presented at test for a learned memory set) is a more contamination-free procedure for measuring the MSS effects, and that if a fixed-set procedure is used, it is worthwhile examining the RT function of the very first trials across the MSSs, which are presumably relatively free of contamination by the subsequent trials.
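A toy numerical illustration of the confounding argument (parameter values invented, not the authors' model): a strictly linear serial scan comes out flatter once a switch cost that penalizes small sets outweighs a priming benefit that also favours small sets.

import numpy as np

# Toy numbers: a strictly linear serial-scan RT, distorted by the two
# fixed-set-procedure confounds named above.
mss = np.array([2.0, 4.0, 6.0, 8.0])      # memory-set sizes
true_rt = 400.0 + 38.0 * mss              # linear serial scan (ms)
switch_cost = 120.0 / mss                 # task-switching: penalizes small sets
priming = 40.0 / mss                      # repetition priming: favours small sets
observed_rt = true_rt + switch_cost - priming   # net inflation of small-set RTs

true_slope = np.polyfit(mss, true_rt, 1)[0]
obs_slope = np.polyfit(mss, observed_rt, 1)[0]
print(f"true slope {true_slope:.1f} ms/item, observed {obs_slope:.1f} ms/item")
# With stronger confounds the observed function can come out flat or
# logarithmic even though the underlying scan is strictly linear.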
Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue
2015-04-01
Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA generates some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar rule was obtained according to the pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.
Burling, David; Bartram, Clive; De Villiers, Melinda; Honeyfield, Lesley [St Mark's Hospital, Intestinal Imaging Centre, London (United Kingdom); Halligan, Steve; Taylor, Stuart [University College Hospital, Specialist Radiology, Level 2 Podium, London (United Kingdom); Altman, Douglas G. [Centre for Medical Statistics, Oxford (United Kingdom); Atkin, Wendy [St Mark's Hospital, Cancer Research UK, London (United Kingdom); Fenlon, Helen; Foley, Shane; O'Hare, Alan [Mater Misericordiae, Dublin (Ireland); Laghi, Andrea; Iannaccone, Riccardo; Mangiapane, Filipo; Ori, Sante [La Sapienza, Rome (Italy); Stoker, Jaap; Florie, Jasper; Poulus, Martin; Hulst, Victor van der [Amsterdam Medical Centre, Amsterdam (Netherlands); Frost, Roger [Salisbury District Hospital, Salisbury (United Kingdom); Dessey, Guido; Lefere, Philippe; Marrannes, Jesse [Stedelijk Ziekenhuis, Roeselare (Belgium); Gallo, Teresa; Nieddu, Giulia; Regge, Daniele; Signoretta, Saverio [Candiolo Oncologic Hospital, Turin (Italy); Kay, Clive; Lowe, Andrew; Williams-Butt, Jane [Bradford Royal Infirmary, Bradford (United Kingdom); Neri, Emmanuele; Politi, Benedetta; Vagli, Paola [University of Pisa, Pisa (Italy); Nicholson, David; Renaut, Lisa; Rudralingham, Velauthan [Hope Hospital, Salford (United Kingdom)
2006-08-15
The extent to which measurement error on CT colonography influences polyp categorisation according to established management guidelines was studied using twenty-eight observers of varying experience, who classified polyps seen at CT colonography as either 'medium' (maximal diameter 6-9 mm) or 'large' (maximal diameter 10 mm or larger). Comparison was then made with the reference diameter obtained in each patient via colonoscopy. The Bland-Altman method was used to assess agreement between observer measurements and colonoscopy, and differences in measurement and categorisation were assessed using Kruskal-Wallis and Chi-squared test statistics, respectively. Observer measurements on average underestimated the diameter of polyps when compared to the reference value, by approximately 2-3 mm, irrespective of observer experience. Ninety-five percent limits of agreement were relatively wide for all observer groups, and had sufficient span to encompass different size categories for polyps. There were 167 polyp observations and 135 (81%) were correctly categorised. Of the 32 observations that were miscategorised, 5 (16%) were overestimations and 27 (84%) were underestimations (i.e. large polyps misclassified as medium). Caution should be exercised for polyps whose colonographic diameter is below but close to the 1-cm boundary threshold in order to avoid potential miscategorisation of advanced adenomas. (orig.)
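The Bland-Altman computation used above reduces to a bias and 95% limits of agreement on paired differences; a minimal sketch with invented measurements:

import numpy as np

def bland_altman_loa(observer, reference):
    # Bland-Altman bias and 95% limits of agreement on paired diameters (mm).
    observer = np.asarray(observer, float)
    reference = np.asarray(reference, float)
    diff = observer - reference
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired data: CTC observer vs colonoscopic reference (mm).
ctc = [8.1, 9.0, 7.5, 12.0, 6.2, 10.5, 8.8]
ref = [10.0, 11.5, 9.0, 14.0, 9.5, 12.0, 11.0]
bias, lo, hi = bland_altman_loa(ctc, ref)
print(f"bias {bias:.1f} mm, 95% LoA [{lo:.1f}, {hi:.1f}] mm")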
Krueger, Andrew T; Kool, Eric T
2008-03-26
We recently described the synthesis and helix assembly properties of expanded DNA (xDNA), which contains base pairs 2.4 Å larger than natural DNA pairs. This designed genetic set is under study with the goals of mimicking the functions of the natural DNA-based genetic system and of developing useful research tools. Here, we study the fluorescence properties of the four expanded bases of xDNA (xA, xC, xG, xT) and evaluate how their emission varies with changes in oligomer length, composition, and hybridization. Experiments were carried out with short oligomers of xDNA nucleosides conjugated to a DNA oligonucleotide, and we investigated the effects of hybridizing these fluorescent oligomers to short complementary DNAs with varied bases opposite the xDNA bases. As monomer nucleosides, the xDNA bases absorb light in two bands: one at approximately 260 nm (similar to DNA) and one at longer wavelength (approximately 330 nm). All are efficient violet-blue fluorophores with emission maxima at approximately 380-410 nm and quantum yields (Φfl) of 0.30-0.52. Short homo-oligomers of the xDNA bases (length 1-4 monomers) showed moderate self-quenching except xC, which showed enhancement of Φfl with increasing length. Interestingly, multimers of xA emitted at longer wavelengths (520 nm) as an apparent excimer. Hybridization of an oligonucleotide to the DNA adjacent to the xDNA bases (with the xDNA portion overhanging) resulted in no change in fluorescence. However, addition of one, two, or more DNA bases in these duplexes opposite the xDNA portion resulted in a number of significant fluorescence responses, including wavelength shifts, enhancements, or quenching. The strongest responses were the enhancement of (xG)n emission by hybridization of one or more adenines opposite them, and the quenching of (xT)n and (xC)n emission by guanines opposite. The data suggest multiple ways in which the xDNA bases, both alone and in oligomers, may be useful as tools in biophysical analysis.
Eckhard, Timo; Valero, Eva M; Hernández-Andrés, Javier; Heikkinen, Ville
2014-03-01
In this work, we evaluate the conditionally positive definite logarithmic kernel in kernel-based estimation of reflectance spectra. Reflectance spectra are estimated from responses of a 12-channel multispectral imaging system. We demonstrate the performance of the logarithmic kernel in comparison with the linear and Gaussian kernel using simulated and measured camera responses for the Pantone and HKS color charts. Especially, we focus on the estimation model evaluations in case the selection of model parameters is optimized using a cross-validation technique. In experiments, it was found that the Gaussian and logarithmic kernel outperformed the linear kernel in almost all evaluation cases (training set size, response channel number) for both sets. Furthermore, the spectral and color estimation accuracies of the Gaussian and logarithmic kernel were found to be similar in several evaluation cases for real and simulated responses. However, results suggest that for a relatively small training set size, the accuracy of the logarithmic kernel can be markedly lower when compared to the Gaussian kernel. Further it was found from our data that the parameter of the logarithmic kernel could be fixed, which simplified the use of this kernel when compared with the Gaussian kernel.
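A minimal sketch of the kernel estimation setting, assuming the logarithmic kernel takes the form k(x, y) = -log(||x - y||^beta + 1) (the paper may parameterize it differently); a rigorous treatment of a conditionally positive definite kernel would add polynomial terms, as in thin-plate splines, which are omitted here for brevity.

import numpy as np

def log_kernel(X, Y, beta=1.0):
    # Assumed logarithmic kernel: k(x, y) = -log(||x - y||^beta + 1).
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return -np.log(d ** beta + 1.0)

def fit_kernel_estimator(responses, spectra, kernel, lam=1e-3):
    # Regularized kernel estimator mapping camera responses to spectra.
    K = kernel(responses, responses)
    alpha = np.linalg.solve(K + lam * np.eye(K.shape[0]), spectra)
    return lambda R: kernel(R, responses) @ alpha

# Toy data: 12-channel responses -> 31-band reflectances (values invented).
rng = np.random.default_rng(0)
train_R, train_S = rng.random((50, 12)), rng.random((50, 31))
estimate = fit_kernel_estimator(train_R, train_S, log_kernel)
print(estimate(rng.random((5, 12))).shape)   # (5, 31)

Swapping log_kernel for a Gaussian kernel in the same harness reproduces the kind of kernel comparison the abstract describes.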
Thurow, R.; Luce, C.; Isaak, D.; Buffington, J.; McKean, J.; Nagel, D.
2008-12-01
Western North American landscapes are marked by extensive natural disturbances and human perturbations. Persistence of native species in these dynamic systems requires diverse and vagile life histories and conservation strategies based on data collected at appropriate spatial and temporal scales. Most experimental forests and watersheds have focused on understanding processes at the scale of hillslopes and small headwater catchments. Extrapolating that understanding to much larger basins (1,000 km2 and greater) using principles of aggregation has proven problematic, suggesting a need for process understanding at larger scales. At larger scales, land and water management policies are more important than impacts associated with small projects, and the spatial patterns of impacts from large-scale natural processes, such as climate variability, major storms, wildfire, and insect outbreaks, become substantially more influential. Within two large (> 7,000 km2) mountainous Idaho river basins, we are developing long-term data sets to describe biological and physical processes influencing aquatic habitat, and the distribution, diversity, and persistence of fish. In the Middle Fork Salmon River, the abundance of ESA-listed, native Chinook salmon has been monitored annually for more than 55 years. Since 1995, we have supplemented this long-term dataset with annual spatial censuses of spawning distributions and collection of samples for landscape-level genetic analyses. Biological data are being integrated with basin-scale predictions of salmon spawning habitat distributions, estimates of sediment motion and bedload transport, basin-scale patterns of spatial autocorrelation in water temperatures, and continuous remote sensing of selected stream channels via airborne laser altimetry. In the Boise River Basin, we have examined the effects of historic wildfires and contemporary climate change on habitat distributions for ESA-listed bull trout and other native salmonids. Comparisons between scales of debris-flow run out paths and fish habitat
Petito Boyce, Catherine; Sax, Sonja N; Cohen, Joel M
2017-08-01
Inhalation plays an important role in exposures to lead in airborne particulate matter in occupational settings, and particle size determines where and how much of airborne lead is deposited in the respiratory tract and how much is subsequently absorbed into the body. Although some occupational airborne lead particle size data have been published, limited information is available reflecting current workplace conditions in the U.S. To address this data gap, the Battery Council International (BCI) conducted workplace monitoring studies at nine lead acid battery manufacturing facilities (BMFs) and five secondary smelter facilities (SSFs) across the U.S. This article presents the results of the BCI studies focusing on the particle size distributions calculated from Personal Marple Impactor sampling data and particle deposition estimates in each of the three major respiratory tract regions derived using the Multiple-Path Particle Dosimetry model. The BCI data showed the presence of predominantly larger-sized particles in the work environments evaluated, with average mass median aerodynamic diameters (MMADs) ranging from 21-32 µm for the three BMF job categories and from 15-25 µm for the five SSF job categories tested. The BCI data also indicated that the percentage of lead mass measured at the sampled facilities in the submicron range (i.e., size range associated with enhanced absorption of associated lead) was generally small. The estimated average percentages of lead mass in the submicron range for the tested job categories ranged from 0.8-3.3% at the BMFs and from 0.44-6.1% at the SSFs. Variability was observed in the particle size distributions across job categories and facilities, and sensitivity analyses were conducted to explore this variability. The BCI results were compared with results reported in the scientific literature. Screening-level analyses were also conducted to explore the overall degree of lead absorption potentially associated with the observed
Zaninetti, L
2015-01-01
Recently it could be shown that the impact crater size-frequency distribution (SFD) of Pluto (based on an analysis of first images obtained by the recent New Horizons flyby) follows a power law with exponent alpha = 2.4926 in the interval of diameter (D) values ranging from 3.75 km to the largest determined value of 37.77 km. A reanalysis of this data set revealed that the whole crater SFD (i.e., with values in the interval of 1.2-37.7 km) can be described by a truncated Pareto distribution.
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
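For the moment-constraint case discussed above, classic MaxEnt reduces to a Gibbs distribution p_i proportional to exp(-lambda * E_i), with lambda chosen so the mean energy matches the (treated-as-exact) empirical value; a minimal sketch with an invented three-state system:

import numpy as np
from scipy.optimize import brentq

def classic_maxent(energies, mean_energy):
    # Classic MaxEnt point distribution p_i ~ exp(-lam * E_i) whose mean
    # energy matches the empirical constraint treated as exact.
    E = np.asarray(energies, float)

    def gap(lam):
        w = np.exp(-lam * (E - E.min()))     # shift for numerical stability
        p = w / w.sum()
        return p @ E - mean_energy

    lam = brentq(gap, -50.0, 50.0)           # root of the constraint equation
    w = np.exp(-lam * (E - E.min()))
    return w / w.sum()

# Three-state toy system with empirical mean energy 1.6:
p = classic_maxent([0.0, 1.0, 3.0], 1.6)
print(p.round(4), "mean:", (p * [0, 1, 3]).sum().round(3))

The generalized approach the abstract describes would replace the fixed mean energy with a density over it, inducing a density over lambda and hence over the MaxEnt probabilities.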
Shin, Yun-Kyoung; Proctor, Robert W
2008-11-01
Previous studies have paired a visual-manual Task 1 with an auditory-vocal Task 2 to evaluate whether the psychological refractory period (PRP) effect is eliminated with two ideomotor-compatible tasks (for which stimuli resemble the response feedback). The present study varied the number of stimulus-response alternatives for Task 1 in three experiments to determine whether set-size and PRP effects were absent, as would be expected if the tasks bypass limited-capacity response-selection processes. In Experiments 1 and 2, the visual-manual task was used as Task 1, with lever-movement and keypress responses, respectively. In Experiment 3, the auditory-vocal task was used as Task 1 and the visual-manual task as Task 2. A significant lengthening of reaction time for 4 vs. 2 alternatives was found for the visual-manual Task 1 and the Task 2 PRP effect in Experiments 1 and 2, suggesting that the visual-manual task is not ideomotor compatible. Neither effect of set size was significant for the auditory-vocal Task 1 in Experiment 3, but there was still a Task 2 PRP effect. Our results imply that neither version of the visual-manual task is ideomotor compatible; other considerations suggest that the auditory-vocal task may also still require response selection.
Cheng-Wu Chen
2008-11-01
Full Text Available The general approach to modeling binary data for the purpose of estimating the propagation of an internal solitary wave (ISW) is based on the maximum likelihood estimate (MLE) method. In cases where the number of observations in the data is small, any inferences made based on the asymptotic distribution of changes in the deviance may be unreliable for binary data (the model's lack of fit is described in terms of a quantity known as the deviance; the deviance for binary data is given by D. Collett (2003)). Logistic regression shows that the P-values for the likelihood ratio test and the score test are both <0.05. However, the null hypothesis is not rejected in the Wald test. The seeming discrepancies in P-values obtained between the Wald test and the other two tests are a sign that the large-sample approximation is not stable. We find that the parameters and the odds ratio estimates obtained via conditional exact logistic regression are different from those obtained via unconditional asymptotic logistic regression. Using exact results is a good idea when the sample size is small and the approximate P-values are <0.10. Thus, in this study, exact analysis is more appropriate.
Zaninetti L.
2016-01-01
Full Text Available Recently it could be shown (Scholkmann, Prog. in Phys., 2016, v. 12(1), 26-29) that the impact crater size-frequency distribution of Pluto (based on an analysis of first images obtained by the recent New Horizons flyby) follows a power law (α = 2.4926 ± 0.3309) in the interval of diameter (D) values ranging from 3.75 ± 1.14 km to the largest determined value of 37.77 km. A reanalysis of this data set revealed that the whole crater SFD (i.e., with values in the interval of 1.2-37.7 km) can be described by a truncated Pareto distribution.
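A minimal sketch of fitting the exponent of a Pareto law truncated to [L, H] by maximum likelihood, using the pdf f(x) = alpha * L^alpha * x^(-alpha-1) / (1 - (L/H)^alpha); the synthetic data below only checks that the estimator recovers a known exponent, it does not reproduce the paper's crater counts.

import numpy as np
from scipy.optimize import minimize_scalar

def truncated_pareto_mle(d, lo, hi):
    # MLE of the exponent alpha of a Pareto law truncated to [lo, hi].
    d = np.asarray(d, float)
    n, s = len(d), np.log(d).sum()

    def nll(alpha):  # negative log-likelihood
        return -(n * np.log(alpha) + n * alpha * np.log(lo)
                 - (alpha + 1) * s
                 - n * np.log(1.0 - (lo / hi) ** alpha))

    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x

# Toy check with inverse-CDF sampling at alpha = 2.5 on [1.2, 37.7] km:
rng = np.random.default_rng(0)
alpha, lo, hi = 2.5, 1.2, 37.7
u = rng.random(5000)
d = lo * (1.0 - u * (1.0 - (lo / hi) ** alpha)) ** (-1.0 / alpha)
print(round(truncated_pareto_mle(d, lo, hi), 3))   # close to 2.5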
Diego Lirman
Full Text Available BACKGROUND: The drastic decline in the abundance of Caribbean acroporid corals (Acropora cervicornis, A. palmata) has prompted the listing of this genus as threatened as well as the development of a regional propagation and restoration program. Using in situ underwater nurseries, we documented the influence of coral genotype and symbiont identity, colony size, and propagation method on the growth and branching patterns of staghorn corals in Florida and the Dominican Republic. METHODOLOGY/PRINCIPAL FINDINGS: Individual tracking of > 1700 nursery-grown staghorn fragments and colonies from 37 distinct genotypes (identified using microsatellites) in Florida and the Dominican Republic revealed a significant positive relationship between size and growth, but a decreasing rate of productivity with increasing size. Pruning vigor (enhanced growth after fragmentation) was documented even in colonies that lost 95% of their coral tissue/skeleton, indicating that high productivity can be maintained within nurseries by sequentially fragmenting corals. A significant effect of coral genotype was documented for corals grown in a common-garden setting, with fast-growing genotypes growing up to an order of magnitude faster than slow-growing genotypes. Algal-symbiont identity established using qPCR techniques showed that clade A (likely Symbiodinium A3) was the dominant symbiont type for all coral genotypes, except for one coral genotype in the DR and two in Florida that were dominated by clade C, with A- and C-dominated genotypes having similar growth rates. CONCLUSION/SIGNIFICANCE: The threatened Caribbean staghorn coral is capable of extremely fast growth, with annual productivity rates exceeding 5 cm of new coral produced for every cm of existing coral. This species benefits from high fragment survivorship coupled with the pruning vigor experienced by the parent colonies after fragmentation. These life-history characteristics make A. cervicornis a successful candidate
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Kenyon, Scott J. [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Bromley, Benjamin C., E-mail: skenyon@cfa.harvard.edu, E-mail: bromley@physics.utah.edu [Department of Physics, University of Utah, 201 JFB, Salt Lake City, UT 84112 (United States)
2012-03-15
We investigate whether coagulation models of planet formation can explain the observed size distributions of trans-Neptunian objects (TNOs). Analyzing published and new calculations, we demonstrate robust relations between the size of the largest object and the slope of the size distribution for sizes 0.1 km and larger. These relations yield clear, testable predictions for TNOs and other icy objects throughout the solar system. Applying our results to existing observations, we show that a broad range of initial disk masses, planetesimal sizes, and fragmentation parameters can explain the data. Adding dynamical constraints on the initial semimajor axis of 'hot' Kuiper Belt objects along with probable TNO formation times of 10-700 Myr restricts the viable models to those with a massive disk composed of relatively small (1-10 km) planetesimals.
Minimizing Maximum Response Time and Delay Factor in Broadcast Scheduling
Chekuri, Chandra; Moseley, Benjamin
2009-01-01
We consider online algorithms for pull-based broadcast scheduling. In this setting there are n pages of information at a server and requests for pages arrive online. When the server serves (broadcasts) a page p, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor, and their weighted versions. We obtain the following results in the worst-case online competitive model. - We show that FIFO (first-in first-out) is 2-competitive even when the page sizes are different. Previously this was known only for unit-sized pages [10] via a delicate argument. Our proof differs from [10] and is perhaps more intuitive. - We give an online algorithm for maximum delay-factor that is $O(1/\epsilon^2)$-competitive with $(1+\epsilon)$-speed for unit-sized pages and with $(2+\epsilon)$-speed for different sized pages. This improves on the algorithm in [12] which required $(2+\epsilon)$-speed and $(4+\epsilon)$-speed respectively. In addition we show that the algori...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Cai, Chen-Bo; Xu, Lu; Han, Qing-Juan; Wu, Hai-Long; Nie, Jin-Fang; Fu, Hai-Yan; Yu, Ru-Qin
2010-05-15
The paper focuses on solving a common and important problem of NIR quantitative analysis in multi-component systems: how to significantly reduce the size of the calibration set while not impairing the predictive precision. To cope with the problem, orthogonal discrete wavelet packet transform (WPT), least correlation design and the correlation coefficient test (r-test) have been combined. As three examples, a two-component carbon tetrachloride system with 21 calibration samples, a two-component aqueous system with 21 calibration samples, and a two-component aqueous system with 41 calibration samples have been treated with the proposed strategy, respectively. In comparison with some previous methods based on many more calibration samples, the results of the strategy showed that the predictive ability was not noticeably decreased for the first system and was clearly strengthened for the second, and the predictive precision for the third was satisfactory for most cases of quantitative analysis. In addition, all important factors and parameters related to our strategy are discussed in detail.
Predecessor queries in dynamic integer sets
Brodal, Gerth Stølting
1997-01-01
We consider the problem of maintaining a set of n integers in the range 0..2^w−1 under the operations of insertion, deletion, predecessor queries, minimum queries and maximum queries on a unit cost RAM with word size w bits. Let f(n) be an arbitrary nondecreasing smooth function satisfying n...
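For orientation only, a sorted-array stand-in for the operation set above; it supports the same queries in O(log n) time but O(n) updates, whereas the paper's word-RAM structures exploit the word size w to do better.

import bisect

class IntegerSet:
    # Sorted-array stand-in for the dynamic predecessor problem. Queries are
    # O(log n); insert/delete are O(n) because of array shifts. The paper's
    # RAM-model structures beat these bounds using word-level parallelism,
    # which a plain Python list cannot express.
    def __init__(self):
        self._xs = []

    def insert(self, x):
        i = bisect.bisect_left(self._xs, x)
        if i == len(self._xs) or self._xs[i] != x:
            self._xs.insert(i, x)

    def delete(self, x):
        i = bisect.bisect_left(self._xs, x)
        if i < len(self._xs) and self._xs[i] == x:
            self._xs.pop(i)

    def predecessor(self, x):
        # Largest element strictly smaller than x, or None.
        i = bisect.bisect_left(self._xs, x)
        return self._xs[i - 1] if i else None

    def minimum(self):
        return self._xs[0] if self._xs else None

    def maximum(self):
        return self._xs[-1] if self._xs else None

s = IntegerSet()
for v in (5, 1, 9, 7):
    s.insert(v)
print(s.predecessor(8), s.minimum(), s.maximum())   # 7 1 9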
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
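The quantities involved can be computed directly: with $L$ the graph Laplacian and $L^+$ its Moore-Penrose pseudoinverse, the resistance distance is $r_{ij} = L^+_{ii} + L^+_{jj} - 2L^+_{ij}$, and summing over pairs gives $Kf(G) = n\,\mathrm{tr}(L^+)$. A minimal sketch, checked on the 4-cycle (a single cycle is itself a cactus), where $Kf(C_4) = 5$:

import numpy as np

def kirchhoff_index(adj):
    # Kirchhoff index Kf(G): sum of resistance distances over all vertex
    # pairs, computed via Kf = n * trace(L+), with L+ the Moore-Penrose
    # pseudoinverse of the graph Laplacian.
    A = np.asarray(adj, float)
    L = np.diag(A.sum(axis=1)) - A
    return len(A) * np.trace(np.linalg.pinv(L))

C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(round(kirchhoff_index(C4), 4))   # 5.0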
Kuzuha, Yasuhisa; Sivapalan, Murugesu; Tomosugi, Kunio; Kishii, Tokuo; Komatsu, Yosuke
2006-04-01
Eagleson's classical regional flood frequency model is investigated. Our intention was not to improve the model, but to reveal previously unidentified important and dominant hydrological processes in it. The change of the coefficient of variation (CV) of annual maximum discharge with catchment area can be viewed as representing the spatial variance of floods in a homogeneous region. Several researchers have reported that the CV decreases as the catchment area increases, at least for large areas. On the other hand, Eagleson's classical studies have been known as pioneer efforts that combine the concept of similarity analysis (scaling) with the derived flood frequency approach. As we have shown, the classical model can reproduce the empirical relationship between the mean annual maximum discharge and catchment area, but it cannot reproduce the empirical decreasing CV-catchment area curve. Therefore, we postulate that previously unidentified hydrological processes would be revealed if the classical model were improved to reproduce the decreasing of CV with catchment area. First, we attempted to improve the classical model by introducing a channel network, but this was ineffective. However, the classical model was improved by introducing a two-parameter gamma distribution for rainfall intensity. What is important is not the gamma distribution itself, but those characteristics of spatial variability of rainfall intensity whose CV decreases with increasing catchment area. Introducing the variability of rainfall intensity into the hydrological simulations explains how the CV of rainfall intensity decreases with increasing catchment area. It is difficult to reflect the rainfall-runoff processes in the model while neglecting the characteristics of rainfall intensity from the viewpoint of annual flood discharge variances.
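A toy check of the aggregation mechanism invoked above (parameters invented): if point rainfall intensity is gamma-distributed and a catchment averages roughly independent cells, the CV of areal intensity falls like one over the square root of the number of cells.

import numpy as np

rng = np.random.default_rng(0)
shape, scale = 0.8, 10.0                       # assumed gamma parameters
for cells in (1, 4, 16, 64, 256):              # proxy for catchment area
    areal = rng.gamma(shape, scale, (20000, cells)).mean(axis=1)
    print(f"{cells:4d} cells: CV = {areal.std() / areal.mean():.3f}")
# The CV halves for every fourfold increase in area, the decreasing
# CV-area behaviour the improved model needs to reproduce.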
任永泰; 李丽
2011-01-01
The weights of each warning index are determined by a combination weighting method that couples weighting based on the maximum entropy criterion with a projection pursuit method based on a real-coded accelerating genetic algorithm; the analytic hierarchy process is used to calculate the weights of each subsystem in the composite system of sustainable water resources utilization; the sustainable development index of Harbin's water resources is calculated using a comprehensive evaluation model; and warning results for the sustainable utilization of Harbin's water resources are finally obtained.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views, and then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of the AMVMED, and comparisons with MVMED are also reported.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which depends on N.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Sets of unit vectors with small pairwise sums
Swanepoel, Konrad J
2010-01-01
We study the sizes of delta-additive sets of unit vectors in a d-dimensional normed space: the sum of any two vectors has norm at most delta. One-additive sets originate in finding upper bounds of vertex degrees of Steiner Minimum Trees in finite dimensional smooth normed spaces (Z. Füredi, J. C. Lagarias, F. Morgan, 1991). We show that the maximum size of a delta-additive set over all normed spaces of dimension d grows exponentially in d for fixed delta>2/3, stays bounded for delta<2/3, and grows linearly at the threshold delta=2/3. Furthermore, the maximum size of a 2/3-additive set in d-dimensional normed space has the sharp upper bound of d, with the single exception of spaces isometric to three-dimensional l^1 space, where there exists a 2/3-additive set of four unit vectors.
Yu-Erh Huang (Dept. of Nuclear Medicine, Chang Gung Memorial Hospital-Kaohsiung Medical Center, Kaohsiung, Taiwan (China)); Chih-Feng Chen (Dept. of Radiology, Chang Gung Memorial Hospital-Kaohsiung Medical Center, Kaohsiung, Taiwan (China)); Yu-Jie Huang (Dept. of Radiation Oncology, Chang Gung Memorial Hospital-Kaohsiung Medical Center, Kaohsiung, Taiwan (China)); Konda, Sheela D.; Appelbaum, Daniel E.; Yonglin Pu (Dept. of Radiology, Univ. of Chicago, Chicago, IL (United States)), e-mail: ypu@radiology.bsd.uchicago.edu
2010-09-15
Background: 18F-fluoro-2-deoxyglucose positron emission tomography (18F-FDG PET) imaging has been shown to be an accurate method for diagnosing pulmonary lesions, and the standardized uptake value (SUV) has been shown to be useful in differentiating benign from malignant lesions. Purpose: To survey the interobserver variability of SUVmax and SUVmean measurements on 18F-FDG PET/CT scans and compare them with tumor size measurements on diagnostic CT scans in the same group of patients with focal pulmonary lesions. Material and Methods: Forty-three pulmonary nodules were measured on both 18F-FDG PET/CT and diagnostic chest CT examinations. Four independent readers measured the SUVmax and SUVmean of the 18F-FDG PET images, and the unidimensional nodule size of the diagnostic CT scans (UDCT) in all nodules. The region of interest (ROI) for the SUV measurements was drawn manually around each tumor on all consecutive slices that contained the nodule. The interobserver reliability and variability, represented by the intraclass correlation coefficient (ICC) and coefficient of variation (COV), respectively, were compared among the three parameters. The correlation between the SUVmax and SUVmean was also analyzed. Results: There was 100% agreement in the SUVmax measurements among the 4 readers in the 43 pulmonary tumors. The ICCs for the SUVmax, SUVmean, and UDCT by the four readers were 1.00, 0.97, and 0.97, respectively. The root-mean-square values of the COVs for the SUVmax, SUVmean, and UDCT by the four readers were 0%, 13.56%, and 11.03%, respectively. There was a high correlation observed between the SUVmax and SUVmean (Pearson's r = 0.958; P < 0.01). Conclusion: This study has shown that the SUVmax of lung nodules can be calculated without any interobserver variation. These findings indicate that SUVmax is a more valuable parameter than the SUVmean or UDCT for the evaluation of therapeutic effects of chemotherapy or radiation therapy on serial studies.
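A minimal sketch of the root-mean-square COV aggregation reported above, with invented reader data; identical readings across readers give an RMS COV of exactly 0%, mirroring the SUVmax result.

import numpy as np

def rms_cov(measurements):
    # measurements: (n_nodules, n_readers) array of one parameter.
    # Per nodule, COV = sd / mean over readers; aggregate as the RMS.
    m = np.asarray(measurements, float)
    cov = m.std(axis=1, ddof=1) / m.mean(axis=1)
    return float(np.sqrt(np.mean(cov ** 2)))

# Hypothetical 4-reader data: identical SUVmax readings vs varying SUVmean.
suvmax = np.tile([[5.2], [3.1], [8.4]], (1, 4))
suvmean = np.array([[2.6, 2.9, 2.4, 2.7],
                    [1.5, 1.8, 1.4, 1.6],
                    [4.0, 4.6, 3.8, 4.3]])
print(rms_cov(suvmax), round(rms_cov(suvmean), 3))   # 0.0 and a nonzero value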
Better with More Choices? Impact of Choice Set Size on Variety Seeking
刘蕾; 郑毓煌; 陈瑞
2015-01-01
Firms today offer more diverse products to induce consumption. Does the variety of choices always enhance consumers' choices of more varieties? Intuitively, the larger the choice set size, the more varieties consumers will choose. However, the present research argued and found that there was an inverted-U relationship between choice set size and variety seeking. Specifically, as choice set size increased, consumers' variety seeking first increased and then decreased. When the choice set was too large, consumers were more likely to use heuristic processing of information, which led to the decrease of variety seeking. Studies 1A and 1B first showed the inverted-U relationship between choice set size and variety seeking with different products. Both experiments used a single factor between-subject design with three choice set size groups: a small choice set (6 flavors), a moderate choice set (12 flavors), and a large choice set (30 flavors) of yogurt (Study 1A) or ice cream (Study 1B). Subjects were randomly assigned to one of the three choice set size groups. Results showed that, for both experiments, consumers' variety seeking first increased as choice set size increased from small to moderate, but consumers' variety seeking then decreased when choice set size increased from moderate to large, supporting the inverted-U relationship between choice set size and variety seeking (H1). Studies 2A and 2B aimed to test the proposed underlying mechanism, namely heuristic information processing, by examining the moderation effect of individuals' need for cognition (NFC). Study 2A used a 3 choice set size (6 vs. 12 vs. 30) × 2 NFC (low vs. high) between-subject design and showed that NFC moderated the inverted-U relationship. Specifically, the inverted-U relationship was only observed for low-NFC participants, but not for high-NFC participants (H2). Furthermore, in the large choice set condition (30 flavors), participants with low NFC showed more variety seeking than
Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi R
2007-12-01
Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
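To make the parsimony objective concrete, here is a minimal sketch, with an illustrative toy tree and character states rather than the authors' genotype method, of Fitch's algorithm, which counts the minimum number of mutations needed to explain the observed leaf states on a fixed rooted binary tree.

# Minimal Fitch small-parsimony sketch: count the minimum number of
# mutations needed to explain observed leaf states on a fixed rooted
# binary tree (illustrative toy data, not the paper's genotype method).

def fitch(tree, states, root='root'):
    """tree: dict mapping internal node -> (left child, right child);
    states: dict mapping leaf -> state. Returns (root state set, #mutations)."""
    mutations = 0
    def walk(node):
        nonlocal mutations
        if node not in tree:                  # leaf
            return {states[node]}
        left, right = tree[node]
        a, b = walk(left), walk(right)
        if a & b:                             # intersection nonempty: no mutation forced
            return a & b
        mutations += 1                        # disjoint sets: one mutation needed
        return a | b
    return walk(root), mutations

# Toy tree: root -> (x, y); x -> (l1, l2); y -> (l3, l4)
tree = {'root': ('x', 'y'), 'x': ('l1', 'l2'), 'y': ('l3', 'l4')}
states = {'l1': 'A', 'l2': 'G', 'l3': 'G', 'l4': 'G'}
print(fitch(tree, states))  # ({'G'}, 1): a single mutation suffices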
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
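As a concrete instance of the Mean Energy Model discussed above, the following sketch finds the maximum entropy distribution p_i proportional to exp(-λE_i) by bisecting on the Lagrange multiplier λ until the mean-energy constraint is met; the energy levels and the target mean are illustrative assumptions.

# Maximum entropy distribution under a mean-energy constraint:
# p_i proportional to exp(-lam * E_i), with lam chosen so that the mean
# energy equals E_target. Energies and target are illustrative assumptions.
import math

E = [0.0, 1.0, 2.0, 4.0]        # energy levels (assumed)
E_target = 1.5                  # required mean energy (assumed)

def mean_energy(lam):
    w = [math.exp(-lam * e) for e in E]
    Z = sum(w)
    return sum(wi * e for wi, e in zip(w, E)) / Z

# mean_energy is decreasing in lam, so bisect to satisfy the constraint.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > E_target:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
w = [math.exp(-lam * e) for e in E]
Z = sum(w)
p = [wi / Z for wi in w]
entropy = -sum(pi * math.log(pi) for pi in p)
print(lam, p, entropy)          # Gibbs weights and their entropy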
Boundary condition effects on maximum groundwater withdrawal in coastal aquifers.
Lu, Chunhui; Chen, Yiming; Luo, Jian
2012-01-01
Prevention of sea water intrusion in coastal aquifers subject to groundwater withdrawal requires optimization of well pumping rates to maximize the water supply while avoiding sea water intrusion. Boundary conditions and the aquifer domain size have significant influences on simulating flow and concentration fields and estimating maximum pumping rates. In this study, an analytical solution is derived based on potential-flow theory for evaluating maximum groundwater pumping rates in a domain with a constant-hydraulic-head landward boundary. An empirical correction factor, introduced by Pool and Carrera (2011) to account for mixing in the case of a constant-recharge-rate boundary condition, is found to be applicable to the constant-hydraulic-head boundary condition as well, and therefore greatly improves the usefulness of the sharp-interface analytical solution. Compared with the solution for a constant-recharge-rate boundary, a constant-hydraulic-head boundary often yields larger estimates of the maximum pumping rate, and when the domain size is five times greater than the distance between the well and the coastline, the effect of setting different landward boundary conditions becomes insignificant, with a relative difference between the two solutions of less than 2.5%. These findings can serve as preliminary guidance for conducting numerical simulations and designing tank-scale laboratory experiments for studying groundwater withdrawal problems in coastal aquifers with minimized boundary condition effects.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
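To see the robustness mechanism, here is a minimal sketch, not the paper's alternating algorithm: plain gradient ascent on a regularized correntropy objective for a linear predictor. The synthetic data, kernel width s, regularization weight c, and step size are all assumptions.

# Sketch: maximize a regularized correntropy objective for a linear
# predictor f(x) = w.x, i.e.
#   J(w) = mean_i exp(-(y_i - w.x_i)^2 / (2 s^2)) - c * ||w||^2,
# by plain gradient ascent. Outliers get exponentially small weight,
# which is the robustness mechanism behind MCC.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=200)
y[:10] += 20.0                      # a few grossly mislabeled samples (outliers)

s, c, lr = 3.0, 1e-3, 0.5           # kernel width, regularization, step (assumed)
w = np.zeros(3)
for _ in range(500):
    r = y - X @ w                   # residuals
    k = np.exp(-r**2 / (2 * s**2))  # per-sample weight: outliers get ~0
    grad = (X * (k * r)[:, None]).mean(axis=0) / s**2 - 2 * c * w
    w += lr * grad
print(w)                            # close to w_true despite the outliers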
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
Full Text Available This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Rispin, Karen; Wee, Joy
2014-01-01
This comparative study of two similar wheelchairs designed for less-resourced settings provides feedback to manufacturers, informing ongoing improvement in wheelchair design. It also provides practical familiarity to clinicians in countries where these chairs are available, in their selection of prescribed wheelchairs. In Kenya, 24 subjects completed 3 timed skills and assessments of energy cost on 2 surfaces in each of 2 wheelchairs: the Regency pediatric chair and a pediatric wheelchair manufactured by the Association of the Physically Disabled of Kenya (APDK). Both wheelchairs are designed for and distributed in less-resourced settings. The Regency chair significantly outperformed the APDK chair in one of the energy cost assessments on both surfaces and in one of three timed skills tests.
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result is derived in the model setting of Poisson's equation.
Günter, Simon; Bunke, Horst
2005-03-01
Unconstrained handwritten text recognition is one of the most difficult problems in the field of pattern recognition. Recently, a number of classifier creation and combination methods, known as ensemble methods, have been proposed in the field of machine learning. They have shown improved recognition performance over single classifiers. In this paper, we examine the influence of the vocabulary size, the number of training samples, and the number of classifiers on the performance of three ensemble methods in the context of cursive handwriting recognition. All experiments were conducted using an off-line handwritten word recognizer based on hidden Markov models (HMMs).
Baranowska-Łączkowska, Angelika; Bartkowiak, Wojciech; Góra, Robert W; Pawłowski, Filip; Zaleśny, Robert
2013-04-05
Static longitudinal electric dipole (hyper)polarizabilities are calculated for six medium-sized π-conjugated organic molecules using the recently developed LPol-n basis set family to assess its performance. Dunning's correlation-consistent basis sets of triple-ζ quality combined with the MP2 method, supported by CCSD(T)/aug-cc-pVDZ results, are used to obtain reference values of the analyzed properties. The same reference is used to analyze the (hyper)polarizabilities predicted by selected exchange-correlation functionals, particularly asymptotically corrected ones.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used, or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used or minimize the number or total size of accepted items. We find the approximation ratios of two natural approximation algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
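For reference, a minimal First-Fit-Decreasing implementation for classical bin packing; the maximum resource variants studied above change the objective and bin ordering requirements, not this basic packing rule. The item sizes are made up.

# First-Fit-Decreasing: sort items by non-increasing size, then place
# each item into the first bin (capacity 1.0) that still has room.
def first_fit_decreasing(items, capacity=1.0):
    bins = []                       # each bin is a list of item sizes
    for x in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + x <= capacity + 1e-12:   # tolerance for float sums
                b.append(x)
                break
        else:
            bins.append([x])        # no existing bin fits: open a new one
    return bins

print(first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5]))
# 4 bins: [[0.7, 0.2], [0.5, 0.5], [0.5, 0.4], [0.2]]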
Generalized degeneracy, dynamic monopolies and maximum degenerate subgraphs
Zaker, Manouchehr
2012-01-01
A graph $G$ is said to be a $k$-degenerate graph if any subgraph of $G$ contains a vertex of degree at most $k$. Let $\kappa$ be any non-negative function on the vertex set of $G$. We first define a $\kappa$-degenerate graph. Next we give an efficient algorithm to determine whether a graph is $\kappa$-degenerate. We revisit the concept of dynamic monopolies in graphs. The latter notion is used in the formulation and analysis of the spread of influence, such as disease or opinion, in social networks. We consider dynamic monopolies with (not necessarily positive) integral threshold assignments. We obtain a necessary and sufficient relationship between dynamic monopolies and generalized degeneracy. As applications of the previous results we consider the problem of determining the maximum size of $\kappa$-degenerate (or $k$-degenerate) induced subgraphs in any graph. We obtain some upper and lower bounds for the maximum size of any $\kappa$-degenerate induced subgraph in general and regular graphs. All of our bounds ar...
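For the classical (constant-threshold) notion, k-degeneracy can be checked by repeatedly deleting a minimum-degree vertex; the κ-version above replaces the constant k with a per-vertex threshold. A minimal sketch on a toy graph:

# Degeneracy of a graph: repeatedly remove a vertex of minimum degree;
# the largest degree seen at removal time is the degeneracy. A graph is
# k-degenerate iff its degeneracy is at most k. (Classical version; the
# paper's kappa-degeneracy replaces k by a per-vertex threshold.)
def degeneracy(adjacency):
    adj = {v: set(ns) for v, ns in adjacency.items()}   # mutable copy
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))         # minimum-degree vertex
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d

# 4-cycle plus a pendant vertex: degeneracy 2
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3, 5}, 5: {4}}
print(degeneracy(adj))  # 2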
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best algorithm known to date for this problem runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
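The paper's accelerated algorithm is more involved, but the underlying chain is easy to sketch: Glauber-type dynamics on matchings (the monomer-dimer model) with fugacity λ, whose stationary distribution weights a matching M proportionally to λ^|M|. The toy graph and parameters below are assumptions.

# Minimal Glauber-type dynamics on matchings: pick a uniform edge; if it
# is in the matching, drop it with prob 1/(1+lam); if both endpoints are
# free, add it with prob lam/(1+lam). Detailed balance gives the
# stationary distribution pi(M) proportional to lam^|M|.
import random

def glauber_matching(edges, lam=4.0, steps=20000, seed=1):
    random.seed(seed)
    matched = set()          # current matching (set of edges)
    covered = set()          # vertices covered by the matching
    for _ in range(steps):
        e = random.choice(edges)
        u, v = e
        if e in matched:
            if random.random() < 1.0 / (1.0 + lam):
                matched.remove(e)
                covered -= {u, v}
        elif u not in covered and v not in covered:
            if random.random() < lam / (1.0 + lam):
                matched.add(e)
                covered |= {u, v}
    return matched

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy graph (assumed)
print(glauber_matching(edges))   # likely a maximum (size-2) matching at high fugacity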
Sandhu, Gagangeet; Ranade, Aditi; Mankal, Pavan; Herlitz, Leal C; Jones, James; Cortell, Stanley
2011-02-01
Acute kidney injury in HIV patients is primarily related to HIV-mediated viral or immunological disease or to treatment-related toxicity (tenofovir). Neoplasms are a rare cause of non-obstructive acute kidney injury, primarily because when they occur, they manifest as discrete masses and not as diffuse infiltration of the renal parenchyma. Diffusely infiltrating tumors include carcinoma of the renal pelvis invading the renal parenchyma, renal lymphoma, squamous cell carcinoma (from lung) metastasizing to the kidney and infiltrating sarcomatous type of renal cell carcinoma. To be classified as a true case of renal lymphoma, the tumor should have escaped detection on routine imaging preceding biopsy, and lymphoma-associated renal failure/nephrotic proteinuria should have given rise to the indication for kidney biopsy. We present here a case of an acute kidney injury due to renal lymphoma in a patient with acquired immune deficiency syndrome that manifested clinically as bland urine sediment, minimal proteinuria and normal-sized kidneys. Chemotherapy resulted in complete reversal of acute kidney injury.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
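The Wiener index that the tree problem above maximizes is simply the sum of shortest-path distances over all vertex pairs; a minimal sketch (toy path graph assumed) computes it by breadth-first search from every vertex.

# Wiener index: sum of shortest-path distances over all unordered vertex
# pairs, computed by a BFS from each vertex (fine for small graphs).
from collections import deque

def wiener_index(adj):
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2            # each pair was counted twice

# Path on 4 vertices: W = (1+2+3) + (1+2) + 1 = 10
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wiener_index(adj))         # 10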
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with a lognormal body and a Pareto tail can be generated as mixtures of lognormally distributed units.
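As a baseline of the kind such a test is compared against, the sketch below applies the standard maximum likelihood (Hill) estimator of a Pareto tail exponent above a fixed threshold to synthetic data with a lognormal body and a Pareto tail; the data, threshold, and parameters are assumptions.

# Hill (maximum likelihood) estimator of a Pareto tail exponent alpha
# above a threshold xmin; a standard baseline, not the paper's maximum
# entropy test. Synthetic lognormal-body / Pareto-tail data are assumed.
import numpy as np

rng = np.random.default_rng(42)
body = rng.lognormal(mean=0.0, sigma=1.0, size=9000)    # lognormal body
tail = 5.0 * (rng.pareto(2.5, size=1000) + 1.0)         # Pareto(alpha=2.5), xmin=5
x = np.concatenate([body, tail])

def hill_alpha(x, xmin):
    z = x[x >= xmin]
    return z.size / np.sum(np.log(z / xmin))

# Lognormal body points above the threshold bias the estimate upward,
# which is exactly the identification problem the paper addresses.
print(hill_alpha(x, xmin=5.0))   # roughly 2.5-3 here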
Body size distribution of the dinosaurs.
O'Gorman, Eoin J; Hone, David W E
2012-01-01
The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups, and supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over the throughput in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial-time approximation algorithm is proposed.
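To illustrate the linear programming step in its simplest form, the following sketch solves a toy single-commodity throughput LP (plain maximum flow, without the coding-modified conflict constraints) using scipy.optimize.linprog; the network and capacities are assumptions.

# Toy throughput LP (single-commodity max flow, no interference/coding
# constraints) solved with scipy.optimize.linprog; network is assumed.
import numpy as np
from scipy.optimize import linprog

# Edges (u, v, capacity); node 0 is the source, node 3 the sink.
edges = [(0, 1, 2), (0, 2, 2), (1, 3, 1), (2, 3, 3), (1, 2, 1)]
m = len(edges)

# Flow conservation at the intermediate nodes 1 and 2: inflow = outflow.
A_eq = np.zeros((2, m))
for j, (u, v, _) in enumerate(edges):
    for row, node in enumerate((1, 2)):
        if v == node:
            A_eq[row, j] += 1.0
        if u == node:
            A_eq[row, j] -= 1.0
b_eq = np.zeros(2)

c = [-1.0 if u == 0 else 0.0 for (u, v, _) in edges]  # maximize flow out of source
bounds = [(0, cap) for (_, _, cap) in edges]          # capacity constraints
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun, res.x)   # maximum throughput 4.0 and an optimal edge flow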
Maximum Deformation Ratio of Droplets of Water-Based Paint Impact on a Flat Surface
Weiwei Xu
2017-06-01
Full Text Available In this research, the maximum deformation ratio of water-based paint droplets impacting and spreading onto a flat solid surface was investigated numerically based on the Navier–Stokes equation coupled with the level set method. The effects of droplet size, impact velocity, and equilibrium contact angle are taken into account. The maximum deformation ratio increases as droplet size and impact velocity increase, and can scale as We^(1/4), where We is the Weber number, for the case of the effect of droplet size. Finally, the effect of the equilibrium contact angle is investigated, and the result shows that the spreading radius decreases as the equilibrium contact angle increases, whereas the height increases. When the dimensionless time t* < 0.3, there is a linear relationship between the dimensionless spreading radius and the dimensionless time to the 1/2 power. For the case of 80° ≤ θe ≤ 120°, where θe is the equilibrium contact angle, the simulation result for the maximum deformation ratio follows the fitting result. The research on the maximum deformation ratio of water-based paint is useful for water-based paint applications in the automobile industry, as well as in the biomedical industry and the real estate industry.
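As a back-of-envelope illustration of the We^(1/4) scaling, the sketch below computes the Weber number and the implied maximum spreading ratio; the fluid properties and the O(1) prefactor C are assumptions (the paper fits the relation numerically).

# Back-of-envelope use of the beta_max ~ We^(1/4) scaling for the maximum
# spreading (deformation) ratio. Fluid properties and the prefactor C are
# illustrative assumptions, not the paper's fitted values.
rho   = 1.2e3    # paint density, kg/m^3 (assumed)
sigma = 0.04     # surface tension, N/m (assumed)
D0    = 100e-6   # droplet diameter, m (assumed)
v     = 10.0     # impact velocity, m/s (assumed)
C     = 1.0      # scaling prefactor, assumed O(1)

We = rho * v**2 * D0 / sigma     # Weber number: inertia vs. surface tension
beta_max = C * We**0.25          # maximum spreading ratio D_max / D0
print(We, beta_max)              # We = 300, beta_max ~ 4.2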
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
Finding a maximum clique or a maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on earlier formulations, are designed in this paper. The programming model for the maximum independent set then follows as a corollary of the main results. The two models can easily be implemented in computer algorithms and software, and are suitable for graphs of any scale. Finally, the models are presented as LINGO programs, and are verified and compared on several examples.
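The standard edge-complement formulation, not necessarily the paper's improved model, can be sketched with the PuLP modeling library (assumed installed): maximize the number of selected vertices subject to x_u + x_v <= 1 for every non-adjacent pair.

# Standard ILP for maximum clique: maximize sum(x_v) subject to
# x_u + x_v <= 1 for every NON-adjacent pair {u, v}. Classic formulation,
# not necessarily the paper's improved one. Requires the PuLP package.
import itertools
import pulp

nodes = [1, 2, 3, 4, 5]
edges = {(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)}   # toy graph (assumed)

def adjacent(u, v):
    return (u, v) in edges or (v, u) in edges

prob = pulp.LpProblem("max_clique", pulp.LpMaximize)
x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in nodes}
prob += pulp.lpSum(x.values())                     # objective: clique size
for u, v in itertools.combinations(nodes, 2):
    if not adjacent(u, v):                         # both cannot be in a clique
        prob += x[u] + x[v] <= 1
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([v for v in nodes if x[v].value() == 1])     # [1, 2, 3]: a maximum clique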
A maximum in the strength of nanocrystalline copper
Schiøtz, Jakob; Jacobsen, Karsten Wedel
2003-01-01
We used molecular dynamics simulations with system sizes up to 100 million atoms to simulate plastic deformation of nanocrystalline copper. By varying the grain size between 5 and 50 nanometers, we show that the flow stress and thus the strength exhibit a maximum at a grain size of 10 to 15 nanometers. This maximum is because of a shift in the microscopic deformation mechanism from dislocation-mediated plasticity in the coarse-grained material to grain boundary sliding in the nanocrystalline region. The simulations allow us to observe the mechanisms behind the grain-size dependence...
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator.
Maximum power analysis of photovoltaic module in Ramadi city
Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)
2013-07-01
Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power that can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed based on the first three months of 2013. The solar irradiance data were measured at the Earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which was set to save the average reading every two minutes, based on one reading per second. The data were analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance were analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of the PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
2010-01-01
7 CFR § 51.1545 (2010), Standards for Grades of Potatoes: Size. (a) The minimum size, or minimum and maximum sizes, ... or in accordance with one of the size designations in Table I or Table II: Provided, That sizes so ...
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
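The l1-penalized Gaussian maximum likelihood problem described above is also available off the shelf: scikit-learn's GraphicalLasso solves it with a related coordinate descent scheme (not necessarily the authors' exact algorithms). A small usage sketch on synthetic chain-structured data:

# l1-penalized (sparse) maximum likelihood estimation of a Gaussian
# precision matrix via scikit-learn's GraphicalLasso; synthetic data with
# a known sparse (chain) precision structure are assumed.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Sparse ground-truth precision: a chain x0 - x1 - x2 (no x0-x2 edge).
prec = np.array([[ 2.0, -1.0,  0.0],
                 [-1.0,  2.0, -1.0],
                 [ 0.0, -1.0,  2.0]])
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(3), cov, size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)       # alpha = l1 penalty weight
print(np.round(model.precision_, 2))            # (0,2) entry near zero: sparsity recovered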
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
... methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation ... data sets that differ considerably in magnitude.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Investigation of Maximum Blade Loading Capability of Lift-Offset Rotors
Yeo, Hyeonsoo; Johnson, Wayne
2013-01-01
Maximum blade loading capability of a coaxial, lift-offset rotor is investigated using a rotorcraft configuration designed in the context of short-haul, medium-size civil and military missions. The aircraft was sized for a 6600-lb payload and a range of 300 nm. The rotor planform and twist were optimized for hover and cruise performance. For the present rotor performance calculations, the collective pitch angle is progressively increased up to and through stall with the shaft angle set to zero. The effects of lift offset on rotor lift, power, controls, and blade airloads and structural loads are examined. The maximum lift capability of the coaxial rotor increases as lift offset increases and extends well beyond the McHugh lift boundary as the lift potential of the advancing blades is fully realized. A parametric study is conducted to examine the differences between the present coaxial rotor and the McHugh rotor in terms of maximum lift capabilities and to identify important design parameters that define the maximum lift capability of the rotor. The effects of lift offset on rotor blade airloads and structural loads are also investigated. Flap bending moment increases substantially as lift offset increases to carry the hub roll moment even at low collective values. The magnitude of flap bending moment is dictated by the lift-offset value (hub roll moment) but is less sensitive to collective and speed.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two special cases; that is, the subset D in the k-limited maximum base problem was taken to be an independent set and a circuit of the matroid, respectively. It was proved that under these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid is thus determined, and the problem of the k-limited maximum base is transformed into the problem of the maximum base of this new matroid. For these two special cases, two algorithms, which are in essence greedy algorithms based on the former matroid, were presented. They were proved to be reasonable and, in view of algorithmic complexity, more efficient than the algorithm presented by Ma Zhongfan.
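For orientation, the generic matroid greedy algorithm referred to above, specialized to the graphic matroid, is just Kruskal's maximum-weight spanning tree; a minimal sketch with made-up weights follows (the k-limited constraint itself, which requires building the new matroid, is not implemented).

# Greedy maximum-weight base of a matroid, specialized to the graphic
# matroid: scan edges by decreasing weight and keep those that remain
# independent (acyclic), checked with union-find.
def max_weight_spanning_tree(n, weighted_edges):
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    base = []
    for w, u, v in sorted(weighted_edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:              # independent: adding it creates no cycle
            parent[ru] = rv
            base.append((w, u, v))
    return base

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]  # (weight, u, v)
print(max_weight_spanning_tree(4, edges))  # [(5, 1, 3), (4, 0, 1), (3, 0, 2)]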
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
The Statistical Mechanics of Random Set Packing and a Generalization of the Karp-Sipser Algorithm
C. Lucibello
2014-01-01
Full Text Available We analyse the asymptotic behaviour of random instances of the maximum set packing (MSP) optimization problem, also known as maximum matching or maximum strong independent set on hypergraphs. We give an analytic prediction of the MSP's size using the 1RSB cavity method from the statistical mechanics of disordered systems. We also propose a heuristic algorithm, a generalization of the celebrated Karp-Sipser one, which allows us to rigorously prove that the replica symmetric cavity method prediction is exact for certain problem ensembles and breaks down when a core survives the leaf removal process. The e-phenomena threshold discovered by Karp and Sipser, marking the onset of core emergence and of replica symmetry breaking, is elegantly generalized to C_s = e/(d-1) for one of the ensembles considered, where d is the size of the sets.
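For intuition about the leaf-removal process, here is the classic Karp-Sipser heuristic for plain graph matching (the d = 2 case); the paper generalizes this idea to hypergraph set packing. The toy graph is an assumption.

# Classic Karp-Sipser leaf-removal heuristic for matching on a graph:
# while a degree-1 vertex exists, match it greedily (always safe);
# otherwise match a random edge. The paper generalizes this to hypergraphs.
import random

def karp_sipser(adjacency, seed=0):
    random.seed(seed)
    adj = {v: set(ns) for v, ns in adjacency.items()}
    matching = []
    def drop(v):
        for u in adj.pop(v, set()):
            adj[u].discard(v)
    while any(adj.values()):
        leaves = [v for v, ns in adj.items() if len(ns) == 1]
        if leaves:
            v = leaves[0]
            u = next(iter(adj[v]))
        else:                               # core reached: random move
            v = random.choice([w for w, ns in adj.items() if ns])
            u = random.choice(sorted(adj[v]))
        matching.append((v, u))
        drop(v)
        drop(u)
    return matching

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}  # path graph (assumed)
print(karp_sipser(adj))   # [(0, 1), (2, 3)]: a maximum matching here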
Fixed-parameter tractability of the maximum agreement supertree problem.
Guillemot, Sylvain; Berry, Vincent
2010-01-01
Given a set L of labels and a collection of rooted trees whose leaves are bijectively labeled by some elements of L, the Maximum Agreement Supertree (SMAST) problem is given as follows: find a tree T on a largest label set L' ⊆ L that homeomorphically contains every input tree restricted to L'. The problem has phylogenetic applications to infer supertrees and perform tree congruence analyses. In this paper, we focus on the parameterized complexity of this NP-hard problem, considering different combinations of parameters as well as particular cases. We show that SMAST on k rooted binary trees on a label set of size n can be solved in O((8n)^k) time, which is an improvement with respect to the previously known O(n^(3k^2)) time algorithm. In this case, we also give an O((2k)^p k n^2) time algorithm, where p is an upper bound on the number of leaves of L missing in a SMAST solution. This shows that SMAST can be solved efficiently when the input trees are mostly congruent. Then, for the particular case where any triple of leaves is contained in at least one input tree, we give O(4^p n^3) and O(3.12^p + n^4) time algorithms, obtaining the first fixed-parameter tractable algorithms on a single parameter for this problem. We also obtain intractability results for several combinations of parameters, thus indicating that it is unlikely that fixed-parameter tractable algorithms can be found in these particular cases.
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on a same set of taxa, the maximum agreement subtree problem (MAST), respectively the maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics, such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees of Life. We provide two linear time algorithms to check the isomorphism, respectively compatibility, of a set of trees, or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
Weiss, I.
2007-01-01
The thesis introduces the new concept of dendroidal set. Dendroidal sets are a generalization of simplicial sets that are particularly suited to the study of operads in the context of homotopy theory. The relation between operads and dendroidal sets is established via the dendroidal nerve functor wh
A maximum entropy approach to separating noise from signal in bimodal affiliation networks
Dianati, Navid
2016-01-01
In practice, many empirical networks, including co-authorship and collocation networks are unimodal projections of a bipartite data structure where one layer represents entities, the second layer consists of a number of sets representing affiliations, attributes, groups, etc., and an inter-layer link indicates membership of an entity in a set. The edge weight in the unimodal projection, which we refer to as a co-occurrence network, counts the number of sets to which both end-nodes are linked. Interpreting such dense networks requires statistical analysis that takes into account the bipartite structure of the underlying data. Here we develop a statistical significance metric for such networks based on a maximum entropy null model which preserves both the frequency sequence of the individuals/entities and the size sequence of the sets. Solving the maximum entropy problem is reduced to solving a system of nonlinear equations for which fast algorithms exist, thus eliminating the need for expensive Monte-Carlo sam...
Needs to Update Probable Maximum Precipitation for Critical Infrastructure
Pathak, C. S.; England, J. F.
2015-12-01
Probable Maximum Precipitation (PMP) is theoretically the greatest depth of precipitation for a given duration that is physically possible over a given size storm area at a particular geographical location at a certain time of the year. It is used to develop inflow flood hydrographs, known as the Probable Maximum Flood (PMF), as the design standard for high-risk flood-hazard structures, such as dams and nuclear power plants. The PMP estimation methodology was developed in the 1930s and 40s, when many dams were constructed in the US. The procedures to estimate PMP were later standardized by the World Meteorological Organization (WMO) in 1973 and revised in 1986. In the US, PMP estimates have been published in a series of Hydrometeorological Reports (e.g., HMR55A, HMR57, and HMR58/59) by the National Weather Service since the 1950s. In these reports, storm data up to the 1980s were used to establish the current PMP estimates. Since that time, we have acquired an additional 30 to 40 years of meteorological data, including newly available radar- and satellite-based precipitation data. These data sets are expected to have improved quality and availability in both time and space. In addition, a significant number of extreme storms have occurred, and in some cases selected events came close to or even exceeded the current PMP estimates. In the last 50 years, climate science has progressed, and scientists have a better understanding of the atmospheric physics of extreme storms. However, applied research on the estimation of PMP has been lagging behind. Alternative methods, such as atmospheric numerical modeling, should be investigated for estimating PMP and its associated uncertainties. It would be highly desirable if regional atmospheric numerical models could be utilized in the estimation of PMP and its uncertainties, in addition to the methods used to originally develop the PMP index maps in the existing hydrometeorological reports.
Legesse, Mengistu; Ameni, Gobena; Mamo, Gezahegne; Medhin, Girmay; Bjune, Gunnar; Abebe, Fekadu
2012-02-01
There is growing evidence showing the potential of T-cell-based gamma interferon (IFN-γ) release assays (IGRAs) for predicting the risk of progression of Mycobacterium tuberculosis (Mtb) infection, though there is little information from tuberculosis (TB)-endemic settings. In this study, we assessed the association between the level of IFN-γ produced by T cells in response to Mtb-specific antigens and the size of skin test indurations in 505 adult individuals who were screened for latent tuberculosis infection (LTBI) using the QuantiFERON-TB Gold In Tube (QFTGIT) assay and tuberculin skin test (TST). There was a strong positive correlation between the level of IFN-γ induced by the specific antigens and the diameter of the skin indurations (Spearman's rho = 0.6, P < 0.001). The size of the skin test indurations was significantly associated with the mean level of IFN-γ [coefficient, 0.65; 95% confidence interval (CI), 0.47 to 0.82; P < 0.001]. Individuals who had skin test indurations ≥ 10 mm were 6.82 times more likely than individuals who had skin test indurations < 10 mm to have high levels of IFN-γ (i.e. a positive QFTGIT result) (adjusted odds ratio = 6.82; 95% CI, 3.67 to 12.69; P < 0.001). In conclusion, the results of this study could provide indirect evidence for the prognostic use of the QFTGIT assay for progression of Mtb infection, though prospective follow-up studies are needed to provide direct evidence.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Research on this synchronization phenomenon is therefore key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
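For a system as small as the G7 (n = 7), a pairwise maximum entropy (Ising) model can be fitted exactly by enumerating all 2^n states. The sketch below is a generic moment-matching fit on synthetic stand-in data, not the authors' code; all names and parameters are illustrative.

```python
# Exact maximum-likelihood fit of a pairwise Ising model for small n:
# gradient ascent matches model magnetizations/correlations to empirical ones.
import itertools
import numpy as np

def fit_ising(data, steps=2000, lr=0.1):
    """data: (T, n) array of +/-1 states. Returns fields h and couplings J."""
    T, n = data.shape
    emp_m = data.mean(axis=0)                      # empirical magnetizations
    emp_c = data.T @ data / T                      # empirical pairwise correlations
    states = np.array(list(itertools.product([-1, 1], repeat=n)))  # all 2^n states
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        energy = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(energy - energy.max())
        p /= p.sum()                               # exact Boltzmann distribution
        mod_m = p @ states                         # model magnetizations
        mod_c = states.T @ (states * p[:, None])   # model correlations
        h += lr * (emp_m - mod_m)                  # log-likelihood gradient steps
        J += lr * (emp_c - mod_c)
        np.fill_diagonal(J, 0.0)
    return h, J

rng = np.random.default_rng(0)
sample = rng.choice([-1, 1], size=(500, 7))        # stand-in for G7 recession data
h, J = fit_ising(sample)
```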
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, which are important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, in addition to minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
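The quantity being maximized is easy to compute empirically. A minimal sketch, assuming scikit-learn is available: mutual information between discretized classifier responses and true labels (this illustrates the regularizer's target, not the paper's entropy-estimation optimizer; data and bin edges are invented).

```python
# Empirical mutual information between binned responses and labels (in nats).
import numpy as np
from sklearn.metrics import mutual_info_score

labels = np.array([0, 0, 1, 1, 1, 0])
responses = np.array([0.1, -0.2, 0.8, 0.6, 0.9, 0.2])
binned = np.digitize(responses, bins=[0.0, 0.5])   # discretize the responses
print(mutual_info_score(labels, binned))
```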
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to and slightly above the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave at the boundary of a tunnel or a borehole.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Size. 51.3413 Section 51.3413 Agriculture Regulations... Standards for Grades of Potatoes for Processing 1 § 51.3413 Size. (a) The minimum size, maximum size or range in size may be specified in connection with the grade in terms of diameter or weight. (b) Diameter...
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Size. 51.344 Section 51.344 Agriculture Regulations of... Standards for Grades of Apples for Processing Size § 51.344 Size. (a) The minimum and maximum sizes or range of sizes shall be determined as agreed upon by buyer and seller. (b) Unless otherwise specified, the...
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
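The PCA step can be illustrated with an ordinary sample covariance in place of the paper's maximum likelihood estimate (which additionally accounts for coordinate uncertainty); the toy ensemble below is invented.

```python
# Principal components of positional covariance across an aligned ensemble.
import numpy as np

rng = np.random.default_rng(1)
ensemble = rng.normal(size=(20, 50, 3))   # 20 models, 50 atoms, xyz (toy data)
flat = ensemble.reshape(20, -1)           # one row per structure
cov = np.cov(flat, rowvar=False)          # positional covariance matrix
evals, evecs = np.linalg.eigh(cov)        # modes of structural correlation
top_mode = evecs[:, -1].reshape(50, 3)    # dominant correlation mode per atom
print(evals[-3:])                         # three largest variances
```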
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Full Text Available Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)}. The optimal control is shown to switch between the extreme values μ_0 and μ_1 according to the position of X_t relative to g_*(S_t), where s ↦ g_*(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, even though the objects of interest may be moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Single- vs. multiple-set strength training in women.
Schlumberger, A; Stec, J; Schmidtbleicher, D
2001-08-01
The aim of this study was to compare the effects of single-set and multiple-set strength training in women. Twenty-seven women (aged 20-40 years) with basic experience in strength training were randomly allocated to either a single-set group (n = 9), a 3-set group (n = 9), or a nontraining control group (n = 9). Both training groups underwent a whole-body strengthening program, exercising 2 days a week for 6 weeks. Exercises included bilateral leg extension, bilateral leg curl, abdominal crunch, seated hip adduction/abduction, seated bench press, and lateral pull-down. The single-set group's program consisted of only 1 set of 6-9 repetitions until failure, whereas the multiple-set group trained with 3 sets of 6-9 repetitions until failure (rest interval between sets, 2 minutes). Twice before and 3 days after termination of the training program, subjects were tested for their 1 repetition maximum strength on the bilateral leg extension and the seated bench press machine. Data were analyzed using a repeated-measures analysis of variance, Scheffé tests, t-tests, and calculation of effect sizes. Both training groups made significant strength improvements in leg extension (multiple-set group, 15%; single-set group, 6%; p < 0.05). However, in the seated bench press only the 3-set group showed a significant increase in maximal strength (10%). Calculation of effect sizes and percentage gains revealed higher strength gains in the multiple-set group. No significant differences were found in the control group. These findings suggest superior strength gains occurred following 3-set strength training compared with single-set strength training in women with basic experience in resistance training.
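The effect size calculation mentioned in the study design is standard Cohen's d with a pooled standard deviation; a sketch with made-up gain scores (the numbers are illustrative, not the study's data):

```python
# Cohen's d between two independent groups, pooled-SD version.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                     / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled

gains_3set = np.array([14.0, 16.5, 15.2, 13.8, 15.9, 14.4, 16.1, 15.0, 14.7])
gains_1set = np.array([5.5, 6.8, 6.1, 5.9, 6.4, 5.7, 6.6, 6.0, 5.8])
print(cohens_d(gains_3set, gains_1set))
```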
PARTITIONING A GRAPH INTO MONOPOLY SETS
AHMED MOHAMMED NAJI
2017-06-01
Full Text Available In a graph G = (V, E), a subset M of V(G) is said to be a monopoly set of G if every vertex v ∈ V − M has at least d(v)/2 neighbors in M. The monopoly size of G, denoted by mo(G), is the minimum cardinality of a monopoly set. In this paper, we study the problem of partitioning V(G) into monopoly sets. An M-partition of a graph G is a partition of V(G) into k disjoint monopoly sets. The monatic number of G, denoted by μ(G), is the maximum number of sets in an M-partition of G. It is shown that 2 ≤ μ(G) ≤ 3 for every graph G without isolated vertices. The properties of each monopoly partite set of G are presented. Moreover, the properties of all graphs G having μ(G) = 3 are presented. It is shown that every graph G having μ(G) = 3 is Eulerian and has χ(G) ≤ 3. Finally, it is shown that for every integer k different from 1, 2 and 4, there exists a graph G of order n = k having μ(G) = 3.
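The definition translates directly into a check; a minimal sketch with networkx (assumed available), on a 6-cycle where the alternate vertices form a monopoly set:

```python
# Verify the monopoly-set condition: every vertex outside M needs
# at least d(v)/2 of its neighbours inside M.
import networkx as nx

def is_monopoly_set(G, M):
    M = set(M)
    return all(sum(1 for u in G[v] if u in M) >= G.degree(v) / 2
               for v in G if v not in M)

G = nx.cycle_graph(6)
print(is_monopoly_set(G, {0, 2, 4}))  # True: each outside vertex has both neighbours in M
```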
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn much attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
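A two-component normal mixture is routinely fitted by maximum likelihood via EM; a sketch with scikit-learn (assumed available) on synthetic stand-in data, not the paper's price series:

```python
# Fit a two-component Gaussian mixture by maximum likelihood (EM).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
prices = np.concatenate([rng.normal(1.0, 0.2, 300),    # regime 1
                         rng.normal(2.5, 0.4, 200)])   # regime 2
gm = GaussianMixture(n_components=2, random_state=0).fit(prices.reshape(-1, 1))
print(gm.means_.ravel(), gm.weights_)                  # component means and weights
```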
Marshall, Wallace F.
2015-01-01
All of the same conceptual questions about size in organisms apply equally at the level of single cells. What determines the size, not only of the whole cell, but of all of its parts? What ensures that subcellular components are properly proportioned relative to the whole cell? How does alteration in organelle size affect biochemical function? Answering such fundamental questions requires us to understand how the size of individual organelles and other cellular structures is determined. Knowledge of organelle biogenesis and dynamics has advanced rapidly in recent years. Does this knowledge give us enough information to formulate reasonable models for organelle size control, or are we still missing something? PMID:25957302
Rakesh R. Pathak
2012-02-01
Full Text Available The law of large numbers, derived from probability theory, tempts us to increase the sample size to the maximum. The central limit theorem, another inference from the same theory, likewise supports the largest possible sample size for better validity when measuring central tendencies such as the mean and median. Sometimes, however, an increase in sample size yields only negligible improvement, or no increase at all in statistical relevance, due to strong dependence or systematic error. If we can afford a somewhat larger sample, a statistical power of 0.90 is acceptable with a medium Cohen's d (0.5), for which a sample size of 175 can be taken very safely; allowing for attrition, 200 samples would suffice. [Int J Basic Clin Pharmacol 2012; 1(1): 43-44]
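The quoted numbers are consistent with a standard two-sample power calculation: with d = 0.5, alpha = 0.05 and power 0.90, roughly 85 subjects per group (about 170 in total) are needed, close to the 175 cited. A sketch assuming statsmodels is available:

```python
# Sample size per group for a two-sided, two-sample t-test.
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5, power=0.90, alpha=0.05)
print(round(n))   # ~85 participants per group
```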
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
33 CFR 401.3 - Maximum vessel dimensions.
2010-07-01
..., and having dimensions that do not exceed the limits set out in the block diagram in appendix I of this... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Maximum vessel dimensions. 401.3 Section 401.3 Navigation and Navigable Waters SAINT LAWRENCE SEAWAY DEVELOPMENT CORPORATION, DEPARTMENT...
A strong test of the maximum entropy theory of ecology.
Xiao, Xiao; McGlinn, Daniel J; White, Ethan P
2015-03-01
The maximum entropy theory of ecology (METE) is a unified theory of biodiversity that predicts a large number of macroecological patterns using information on only species richness, total abundance, and total metabolic rate of the community. We evaluated four major predictions of METE simultaneously at an unprecedented scale using data from 60 globally distributed forest communities including more than 300,000 individuals and nearly 2,000 species. METE successfully captured 96% and 89% of the variation in the rank distribution of species abundance and individual size but performed poorly when characterizing the size-density relationship and intraspecific distribution of individual size. Specifically, METE predicted a negative correlation between size and species abundance, which is weak in natural communities. By evaluating multiple predictions with large quantities of data, our study not only identifies a mismatch between abundance and body size in METE but also demonstrates the importance of conducting strong tests of ecological theories.
Stoll, Robert R
1979-01-01
Set Theory and Logic is the result of a course of lectures for advanced undergraduates, developed at Oberlin College for the purpose of introducing students to the conceptual foundations of mathematics. Mathematics, specifically the real number system, is approached as a unity whose operations can be logically ordered through axioms. One of the most complex and essential of modern mathematical innovations, the theory of sets (crucial to quantum mechanics and other sciences), is introduced in a most careful conceptual manner, aiming for the maximum in clarity and stimulation for further study in
Sample size calculation in metabolic phenotyping studies.
Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J
2015-09-01
The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by Kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for Kernel and log-normal estimates, and in GNU Octave for log-normal estimates (Kernel density estimates are not robust enough in GNU octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and extract hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments in both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflective coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
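The Toeplitz/Levinson step has a compact modern equivalent: solving the Yule-Walker normal equations for a prediction-error filter with SciPy's Levinson-type Toeplitz solver. A sketch on white-noise stand-in data (not the paper's seismograms):

```python
# Solve R a = r for AR/prediction coefficients, where R is the symmetric
# Toeplitz autocorrelation matrix; scipy uses a Levinson-Durbin-type solver.
import numpy as np
from scipy.linalg import solve_toeplitz

x = np.random.default_rng(3).normal(size=1000)             # stand-in trace
r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)  # autocorrelation, lags 0..N-1
order = 8
a = solve_toeplitz(r[:order], r[1:order + 1])              # prediction coefficients
print(a)
```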
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits; dispersal syndrome and growth form together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
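A numeric version of the maximization described above: power P = V·I(V) for a simple single-diode panel model, with the maximum located where dP/dV = 0. The model and all parameter values below are illustrative assumptions, not from the article.

```python
# Maximum power point of a toy single-diode panel model.
import numpy as np

I_sc, I_0, V_T = 5.0, 1e-9, 1.5            # short-circuit current, saturation current,
V = np.linspace(0.0, 40.0, 4000)           # module-level thermal voltage; voltage sweep
I = I_sc - I_0 * (np.exp(V / V_T) - 1.0)   # diode-model current
P = V * I
k = np.argmax(P)                           # dP/dV = 0 at the maximum power point
print(f"V_mp={V[k]:.2f} V, I_mp={I[k]:.2f} A, P_max={P[k]:.1f} W")
```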
Desu, M M
2012-01-01
One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
Protein side-chain packing problem: a maximum edge-weight clique algorithmic approach.
Dukka Bahadur, K C; Tomita, Etsuji; Suzuki, Jun'ichi; Akutsu, Tatsuya
2005-02-01
"Protein Side-chain Packing" has an ever-increasing application in the field of bio-informatics, dating from the early methods of homology modeling to protein design and to the protein docking. However, this problem is computationally known to be NP-hard. In this regard, we have developed a novel approach to solve this problem using the notion of a maximum edge-weight clique. Our approach is based on efficient reduction of protein side-chain packing problem to a graph and then solving the reduced graph to find the maximum clique by applying an efficient clique finding algorithm developed by our co-authors. Since our approach is based on deterministic algorithms in contrast to the various existing algorithms based on heuristic approaches, our algorithm guarantees of finding an optimal solution. We have tested this approach to predict the side-chain conformations of a set of proteins and have compared the results with other existing methods. We have found that our results are favorably comparable or better than the results produced by the existing methods. As our test set contains a protein of 494 residues, we have obtained considerable improvement in terms of size of the proteins and in terms of the efficiency and the accuracy of prediction.
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
Full Text Available A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Maximum Likelihood Under Response Biased Sampling
Chambers, Raymond; Dorfman, Alan; Wang, Suojin
2003-01-01
Informative sampling occurs when the probability of inclusion in the sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
M. Retel Helmrich (Mathijn Jan)
2013-01-01
The lot-sizing problem concerns a manufacturer that needs to solve a production planning problem. The producer must decide at which points in time to set up a production process, and when he/she does, how much to produce. There is a trade-off between inventory costs and costs associated
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance and temperature. - Abstract: Superconducting fault current limiters (SFCL) could reduce short circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CCs used in the design of a SFCL can be determined.
Set Covering Problems with General Objective Functions
Cardinal, Jean
2008-01-01
We introduce a parameterized version of set cover that generalizes several previously studied problems. Given a ground set V and a collection of subsets S_i of V, a feasible solution is a partition of V such that each subset of the partition is included in one of the S_i. The problem involves maximizing the mean subset size of the partition, where the mean is the generalized mean of parameter p, taken over the elements. For p=-1, the problem is equivalent to the classical minimum set cover problem. For p=0, it is equivalent to the minimum entropy set cover problem, introduced by Halperin and Karp. For p=1, the problem includes the maximum-edge clique partition problem as a special case. We prove that the greedy algorithm simultaneously approximates the problem within a factor of (p+1)^1/p for any p in R^+, and that this is the best possible unless P=NP. These results both generalize and simplify previous results for special cases. We also consider the corresponding graph coloring problem, and prove several tr...
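The greedy algorithm analyzed in the paper reduces, for the classical minimum set cover special case (p = -1), to repeatedly picking the set covering the most uncovered elements; a minimal sketch (names invented for illustration):

```python
# Greedy set cover: pick the subset covering the most uncovered elements.
def greedy_set_cover(universe, subsets):
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        cover.append(best)
        uncovered -= best
    return cover

V = set(range(10))
S = [set(range(5)), set(range(4, 8)), {7, 8, 9}, {1, 9}]
print(greedy_set_cover(V, S))   # e.g. [{0,1,2,3,4}, {4,5,6,7}, {7,8,9}]
```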
Constructive Sets in Computable Sets
傅育熙
1997-01-01
The original interpretation of the constructive set theory CZF in Martin-Loef's type theory uses the 'extensional identity types'. It is generally believed that these 'types' do not belong to type theory. In this paper it will be shown that the interpretation goes through without identity types. This paper will also show that the interpretation can be given in an intensional type theory. This reflects the computational nature of the interpretation. This computational aspect is reinforced by an ω-Set model of CZF.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that lie between the traditional continuum regime and the free-molecular regime has proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L^1(Omega), where Omega is a compact subset of R^2. We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Omega, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
Maximum Matchings in Random Bipartite Graphs and the Space Utilization of Cuckoo Hashtables
Frieze, Alan
2009-01-01
We study the following question in random graphs. We are given two disjoint sets $L,R$ with $|L|=n=\alpha m$ and $|R|=m$. We construct a random graph $G$ by allowing each $x\in L$ to choose $d$ random neighbours in $R$. The question discussed is the size $\mu(G)$ of the largest matching in $G$. When considered in the context of Cuckoo Hashing, one key question is: when is $\mu(G)=n$ whp? We answer this question exactly when $d$ is at least four. We also establish a precise threshold for when Phase 1 of the Karp-Sipser greedy matching algorithm suffices to compute a maximum matching whp.
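The matching question is easy to experiment with: build the random left-regular bipartite graph and compute a maximum matching with networkx's Hopcroft-Karp implementation. A simulation sketch (illustrative parameters, not the paper's proof; with d = 4 and alpha = 0.8 an L-perfect matching is expected with high probability):

```python
# Simulate the random bipartite graph and test for an L-perfect matching.
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
n, m, d = 800, 1000, 4                        # |L|, |R|, choices per left vertex
G = nx.Graph()
L = [('L', i) for i in range(n)]
for x in L:
    for r in rng.choice(m, size=d, replace=False):
        G.add_edge(x, ('R', int(r)))
matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=L)
print(len(matching) // 2 == n)                # matching dict lists both directions
```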
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
Hierarchical Sets: Analyzing Pangenome Structure through Scalable Set Visualizations
Pedersen, Thomas Lin
2017-01-01
The increase in available microbial genome sequences has resulted in an increase in the size of the pangenomes being analyzed. Current pangenome visualizations are not intended for the pangenome sizes possible today, and new approaches are necessary in order to convert the increase in available information into an increase in knowledge. As the pangenome data structure is essentially a collection of sets, we explore the potential of scalable set visualization as a tool for pangenome analysis. We present a new hierarchical clustering algorithm based on set arithmetic that optimizes the intersection sizes along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in a pangenome context, shows the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence pattern do...
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
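For the unconstrained continuous case, the closed form alluded to here is the standard generalized Gaussian result; the display below is a sketch from the general theory, stated under that assumption, and is not quoted from the paper.

```latex
% Among real continuous densities with fixed L_p norm \|X\|_p = (E|X|^p)^{1/p},
% differential entropy is maximized by the generalized Gaussian with scale
% a = \|X\|_p \, p^{1/p}:
\[
  f^{*}(x) = \frac{p}{2a\,\Gamma(1/p)}\; e^{-(|x|/a)^{p}},
  \qquad
  h_{\max} = \frac{1}{p} + \frac{\ln p}{p}
           + \ln\!\bigl(2\,\Gamma(1+1/p)\bigr) + \ln \lVert X \rVert_{p},
\]
% which exhibits the straight-line relationship between maximum entropy and
% \ln\|X\|_p described in the abstract (entropy in nats, p > 0).
```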
Maximum-entropy closure of hydrodynamic moment hierarchies including correlations.
Hughes, Keith H; Burghardt, Irene
2012-06-07
Generalized hydrodynamic moment hierarchies are derived which explicitly include nonequilibrium two-particle and higher-order correlations. The approach is adapted to strongly correlated media and nonequilibrium processes on short time scales which necessitate an explicit treatment of time-evolving correlations. Closure conditions for the extended moment hierarchies are formulated by a maximum-entropy approach, generalizing related closure procedures for kinetic equations. A self-consistent set of nonperturbative dynamical equations are thus obtained for a chosen set of single-particle and two-particle (and possibly higher-order) moments. Analytical results are derived for generalized Gaussian closures including the dynamic pair distribution function and a two-particle correction to the current density. The maximum-entropy closure conditions are found to involve the Kirkwood superposition approximation.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Brand, Judith, Ed.
1995-01-01
"Exploring" is a magazine of science, art, and human perception that communicates ideas museum exhibits cannot demonstrate easily by using experiments and activities for the classroom. This issue concentrates on size, examining it from a variety of viewpoints. The focus allows students to investigate and discuss interconnections among…
Hansen, Pelle Guldborg; Jespersen, Andreas Maaløe; Skov, Laurits Rhoden
2015-01-01
Objectives We examined how a reduction in plate size would affect the amount of food waste from leftovers in a field experiment at a standing lunch for 220 CEOs. Methods A standing lunch for 220 CEOs in the Danish Opera House was arranged to feature two identical buffets with plates of two differ...
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Size. 51.1859 Section 51.1859 Agriculture Regulations... Standards for Fresh Tomatoes 1 Size § 51.1859 Size. (a) The size of tomatoes packed in any standard type shipping container shall be specified and marked according to one of the size designations set forth in...
PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation
Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.
2007-06-23
In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than baseline and are the highest F-score for the fine-grained English All-Words subtask.
The Maximum Patch Method for Directional Dark Matter Detection
Henderson, Shawn; Fisher, Peter
2008-01-01
Present and planned dark matter detection experiments search for WIMP-induced nuclear recoils in poorly known background conditions. In this environment, the maximum gap statistical method provides a way of setting more sensitive cross section upper limits by incorporating known signal information. We give a recipe for the numerical calculation of upper limits for planned directional dark matter detection experiments, that will measure both recoil energy and angle, based on the gaps between events in two-dimensional phase space.
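The core statistic is simple to compute in one dimension (the paper extends the idea to the two-dimensional energy-angle space); a minimal sketch with invented event data:

```python
# "Maximum gap": the largest interval in the observation window with no events.
import numpy as np

def maximum_gap(events, lo=0.0, hi=1.0):
    pts = np.sort(np.concatenate([[lo], np.asarray(events), [hi]]))
    return np.diff(pts).max()

rng = np.random.default_rng(5)
print(maximum_gap(rng.uniform(size=20)))
```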
STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS
Anonymous
2001-01-01
The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The factors limiting maximum specific sludge activity (diffusion, substrate sort, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacity of the IC and Biobed anaerobic reactors was analyzed and compared using the batch test results.
Forst, Michael
2012-11-01
The shakeout in the solar cell and module industry is in full swing. While the number of companies and production locations shutting down in the Western world is increasing, capacity expansion in the Far East seems to continue unabated. Size, in combination with a good sales network, has become the key to surviving the current storm. The trade war with China already looming on the horizon is adding to the uncertainties. (orig.)
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
付甫永; 王健; 司徒春南
2009-01-01
The trapping effect on Monochamus alternatus of trap trees of three diameter-at-breast-height (DBH) classes, set out at densities of 1, 3, 5 and 10 trees per hectare, was studied. Traps hung uniformly in the forest served as the control, and comparative trials with different trap-tree densities and DBH classes were run over the same area; the adults caught in traps and the larvae attracted to the trap trees were recorded. From the analysis of larval attraction for the three DBH classes at each density, trap trees of 5-15 cm DBH set at 3 trees per hectare were found to be most effective for controlling Monochamus alternatus, the insect vector of pine wilt disease.
Generalized bounds on the partial periodic correlation of complex roots of unity sequence set
Anonymous
2008-01-01
In this paper, generalized bounds are derived on the partial periodic correlation of complex-roots-of-unity sequence sets with a zero or low correlation zone (ZCZ/LCZ), important criteria for sequence design and application. The derived bounds are with respect to family size, subsequence length, maximum partial autocorrelation sidelobe, maximum partial crosscorrelation value and the ZCZ/LCZ. The results show that the derived bounds include the previous periodic bounds, such as the Sarwate bound, Welch bound, Peng-Fan bound and Paterson-Lothian bound, as special cases.
Takagi, Mari; Kojima, Takashi; Ichikawa, Kei; Tanaka, Yoshiki; Kato, Yukihito; Horai, Rie; Tamaoki, Akeno; Ichikawa, Kazuo
2017-01-01
The current study compares the postoperative mechanical properties of the anterior capsule between femtosecond laser capsulotomy (FLC) and continuous curvilinear capsulorhexis (CCC) of variable size and shape in porcine eyes. All CCCs were created using capsule forceps. Irregular or eccentric CCCs were also created to simulate real cataract surgery. For FLC, capsulotomies 5.3 mm in diameter were created using the LenSx® (Alcon) platform. Fresh porcine eyes were used in all experiments. The edges of the capsule openings were pulled at a constant speed using two L-shaped jigs. Stretch force and distance were recorded over time, and the maximum values were defined as those recorded when the capsule broke. There was no difference in maximum stretch force between CCC and FLC. There were no differences in circularity between FLC and same-sized CCC. However, same-sized CCC did show significantly higher maximum stretch forces than FLC. Teardrop-shaped CCC showed lower maximum stretch forces than same-sized CCC and FLC. Heart-shaped CCC showed lower maximum stretch forces than same-sized CCC. In conclusion, while capsule edge strength after CCC varied depending on size or irregularity, FLC had the advantage of stable maximum stretch forces.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
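A minimal sketch of the dual idea, under the assumption of linear data constraints A f = d: the maximum entropy image is parameterized by one dual variable per datum, so the iteration runs in the (smaller) data space rather than the image space. This is a generic dual-ascent scheme, not necessarily the paper's exact update.

```python
import numpy as np

def maxent_dual(A, d, steps=5000, lr=0.05):
    """Dual ascent for: maximize -sum(f * log f) subject to A @ f = d.

    Stationarity gives f_j = exp(-1 + (A.T @ lam)_j), so the iteration
    involves one dual variable per datum instead of one unknown per pixel.
    """
    lam = np.zeros(A.shape[0])
    for _ in range(steps):
        f = np.exp(-1.0 + A.T @ lam)   # primal image implied by the duals
        lam += lr * (d - A @ f)        # dual gradient = constraint residual
    return np.exp(-1.0 + A.T @ lam)

# Toy 'blur': 5 image pixels observed through 3 linear measurements.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(3, 5))
d = A @ np.array([0.1, 0.8, 1.5, 0.4, 0.2])
f_hat = maxent_dual(A, d)
print(np.abs(A @ f_hat - d).max())     # constraint residual shrinks toward zero
```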
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.)
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the requirement of maximum entropy, the characteristics of the system, and the connection conditions. It can be applied to the statistical description of closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Full Text Available Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female femora, and 453.35 and 420.44 for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles
Paulo H. Egydio
2008-01-01
Full Text Available Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should aim at a functional penis, that is, a straightened penis with rigidity sufficient for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. Tunical reconstruction for maximum penile length and girth restoration should be based on the maximum possible length of the dissected neurovascular bundle and on the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As penile rectification and rigidity are required to achieve complete functional restoration of the penis, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough to provide complete functional restoration. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.
$\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs
van de Geer, Sara
2012-01-01
We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.
许新勇; 齐志诚; 刘宪亮
2012-01-01
A 3-D finite element model of the extended foundation structure for a wind turbine-generator set is established, and various foundation reinforcement patterns are analyzed in view of the complexity of the wind load. The impacts of bi-directional orthogonal reinforcement, diametral-circular fabric reinforcement and other patterns on the stress state and displacement of the structure are studied by introducing contact properties and concrete material nonlinearity, together with an analysis of the influence of different loading modes; from this, the laws governing foundation stress and displacement are established. The results not only give a reasonably designed and economical reinforcement arrangement for the foundation of a wind turbine-generator set, but also provide a scientific basis for foundation design and optimization.
Carlos A. L. Pires
2013-02-01
Full Text Available The Minimum Mutual Information (MinMI) Principle provides the least committed, maximum-joint-entropy (ME) inferential law that is compatible with prescribed marginal distributions and empirical cross constraints. Here, we estimate MI bounds (the MinMI values) generated by constraining sets Tcr comprehended by mcr linear and/or nonlinear joint expectations, computed from samples of N iid outcomes. Marginals (and their entropy) are imposed by single morphisms of the original random variables. N-asymptotic formulas are given both for the distribution of cross-expectation estimation errors and for the MinMI estimation bias, its variance and its distribution. A growing Tcr leads to an increasing MinMI, converging eventually to the total MI. Under N-sized samples, the MinMI increment relative to two encapsulated sets Tcr1 ⊂ Tcr2 (with numbers of constraints mcr1
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Ivan Gregor
2013-06-01
Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
Maximum likelihood Jukes-Cantor triplets: analytic solutions.
Chor, Benny; Hendy, Michael D; Snir, Sagi
2006-03-01
Maximum likelihood (ML) is a popular method for inferring a phylogenetic tree of the evolutionary relationship of a set of taxa, from observed homologous aligned genetic sequences of the taxa. Generally, the computation of the ML tree is based on numerical methods, which in a few cases are known to converge to a local maximum on a tree, which is suboptimal. The extent of this problem is unknown; one approach is to attempt to derive algebraic equations for the likelihood and find the maximum points analytically. This approach has so far only been successful in the very simplest cases, of three or four taxa under the Neyman model of evolution of two-state characters. In this paper we extend this approach, for the first time, to four-state characters, the Jukes-Cantor model under a molecular clock, on a tree T on three taxa, a rooted triple. We employ spectral methods (Hadamard conjugation) to express the likelihood function parameterized by the path-length spectrum. Taking partial derivatives, we derive a set of polynomial equations whose simultaneous solution contains all critical points of the likelihood function. Using tools of algebraic geometry (the resultant of two polynomials) in a computer algebra package (Maple), we are able to find all turning points analytically. We then employ this method on real sequence data and obtain realistic results on the primate-rodent divergence time.
Kuzyakov, Yakov; Razavi, Bahar
2017-04-01
Estimation of the soil volume affected by roots - the rhizosphere - is crucial to assess the effects of plants on properties and processes in soils and the dynamics of nutrients, water, microorganisms and soil organic matter. The challenges in assessing rhizosphere size are: 1) the continuum of properties between the root surface and root-free soil, 2) differences in the distributions of various properties (carbon, microorganisms and their activities, various nutrients, enzymes, etc.) along and across the roots, and 3) temporal changes of properties and processes. Thus, a holistic approach is necessary to describe the rhizosphere size and root effects. We collected literature and our own data on the rhizosphere gradients of a broad range of physico-chemical and biological properties: pH, CO2, oxygen, redox potential, water uptake, various nutrients (C, N, P, K, Ca, Mg, Mn and Fe), organic compounds (glucose, carboxylic acids, amino acids), and activities of enzymes of the C, N, P and S cycles. The collected data were obtained by destructive approaches (thin-layer slicing), rhizotron studies and in situ visualization techniques: optodes, zymography, sensitive gels, 14C and neutron imaging. The root effects extended from less than 0.5 mm (nutrients with slow diffusion) to more than 50 mm (for gases); the most common extents, however, were between 1 and 10 mm. Sharp gradients (e.g. for P, carboxylic acids, enzyme activities) allowed clear rhizosphere boundaries, and thus the soil volume affected by roots, to be calculated. First analyses were done to assess the effects of soil texture and moisture, as well as of root system and age, on these gradients. Most properties can be described by two curve types, exponential saturation and S curve, each with increasing or decreasing concentration profiles from the root surface. The gradient-based distribution functions were calculated and used to extrapolate to the whole soil depending on root density and rooting intensity. We
Size reduction of the transfer matrix of two-dimensional Ising and Potts models
M. Ghaemi
2003-12-01
Full Text Available A new algebraic method is developed to reduce the size of the transfer matrix of Ising and three-state Potts ferromagnets on strips of width r sites of square and triangular lattices. The size reduction is set up in such a way that the maximum eigenvalues of the reduced and the original transfer matrices are exactly the same. In this method we write the original transfer matrix in a special blocked form such that the row sums within each block are the same. The reduced matrix is obtained by replacing each block of the original transfer matrix with the sum of the elements of one of its rows. Our method results in a significant matrix size reduction, which is a crucial factor in determining the maximum eigenvalue.
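A minimal numerical sketch of the block row-sum reduction, assuming the exact-lumpability condition stated in the abstract; the toy 3x3 matrix and the block partition are illustrative, not a physical transfer matrix.

```python
import numpy as np

def reduce_transfer_matrix(T, blocks):
    """Lump a nonnegative transfer matrix over a partition of its indices.

    Assumes exact lumpability: for every pair of blocks (I, J), the partial
    row sums sum_{j in J} T[i, j] agree for all i in I. Each block is then
    replaced by that common row sum; for an irreducible nonnegative matrix
    this keeps the Perron (maximum) eigenvalue unchanged.
    """
    return np.array([[T[I[0], J].sum() for J in blocks] for I in blocks])

# Toy matrix satisfying the lumpability condition for blocks {0} and {1, 2}.
T = np.array([[1.0, 2.0, 2.0],
              [3.0, 4.0, 1.0],
              [3.0, 1.0, 4.0]])
R = reduce_transfer_matrix(T, [[0], [1, 2]])
print(max(np.linalg.eigvals(T).real), max(np.linalg.eigvals(R).real))  # 7.0 7.0
```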
Independent sets in chain cacti
Sedlar, Jelena
2011-01-01
In this paper chain cacti are considered. First, for two specific classes of chain cacti (orto-chains and meta-chains of cycles with h vertices) the recurrence relation for the independence polynomial is derived. That recurrence relation is then used to derive explicit expressions for the independence number and the number of maximum independent sets for such chains. Also, the recurrence relation for the total number of independent sets for such graphs is derived. Finally, a proof is provided that orto-chains and meta-chains are the only extremal chain cacti with respect to the total number of independent sets (orto-chains minimal and meta-chains maximal).
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation of a disc depending on the luminosity, surface brightness, and colour of the disc. An adopted fixed mass-to-light ratio as a function of colour serves as the physical basis of this relation; that functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the sample results, the total length of CC used in the design of an SFCL can be determined.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
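The stated bound, maximum seismic moment equal to rigidity times injected volume, converts to a magnitude with the standard Hanks-Kanamori relation; the sketch below assumes a typical crustal rigidity of 3e10 Pa and a hypothetical injection volume.

```python
import math

def mcgarr_max_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper bound on magnitude from M0_max = G * (injected volume).

    The moment is converted to moment magnitude with the Hanks-Kanamori
    relation Mw = (2/3) * (log10(M0 [N*m]) - 9.1). The rigidity G is a
    typical crustal value, assumed here rather than measured.
    """
    m0_max = rigidity_pa * injected_volume_m3      # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)

# Hypothetical disposal well that has injected 1e5 cubic metres of fluid:
print(round(mcgarr_max_magnitude(1.0e5), 2))       # -> 4.25
```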
张义; 陈欢
2012-01-01
[Objective] Tree morphological indices were studied in order to provide scientific references for the cultivation of good-quality citrus fruits. [Method] Relationships among the mother fruit-bearing branch, the type of fruit-bearing branch, fruit-setting rate and fruit diameter were investigated in 21-year-old Satsuma mandarin Guoqing 1 trees. [Result] The length and thickness of mother fruit-bearing branches ranged from 0.13-46.50 cm and 0.10-0.92 cm, respectively. Longer and thicker mother branches were found to have more fruit-bearing branches with leaves. The fruit-setting rate was higher in fruit-bearing branches with leaves than in those without leaves. The fruit-setting rates of fruit-bearing branches both with and without leaves were highest when the diameter of the mother fruit-bearing branch was less than or equal to 0.20 cm, and highest in fruit-bearing branches with leaves when the length of the mother fruit-bearing branch was less than or equal to 10.0 cm. No significant correlations were observed between the diameter and length of the mother fruit-bearing branch and the fruit-setting rate of fruit-bearing branches with or without leaves. Fruit-bearing branches with 1 to 3 leaves bore the most fruit, accounting for 87.6%. The diameter and length of the mother fruit-bearing branch were positively correlated with fruit diameter, whereas fruit diameter showed no correlation with the number of leaves on the fruit-bearing branch. Lastly, fruits on fruit-bearing branches with leaves were generally bigger than those on branches without leaves. [Conclusion] Satsuma mandarin trees have a large number of relatively small and short mother fruit-bearing branches. However, thicker and stronger mother fruit-bearing branches produce more fruit-bearing branches with leaves, leading to higher fruit-setting rates and bigger fruits. Hence, maintaining thicker and stronger mother fruit-bearing branches should be taken into account in production.
AN EFFICIENT APPROXIMATE MAXIMUM LIKELIHOOD SIGNAL DETECTION FOR MIMO SYSTEMS
Cao Xuehong
2007-01-01
This paper proposes an efficient approximate Maximum Likelihood (ML) detection method for Multiple-Input Multiple-Output (MIMO) systems, which searches a local area instead of performing an exhaustive search and selects valid search points in each transmit antenna's signal constellation instead of the whole hyperplane. Both the selection and the search complexity can be reduced significantly. The method trades off computational complexity against system performance by adjusting the neighborhood size used to select the valid search points. Simulation results show that the performance is comparable to that of ML detection while the complexity is only a small fraction of it.
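A generic sketch of neighbourhood-limited ML detection in this spirit: start from a zero-forcing estimate, keep only the k nearest constellation points per transmit antenna, and search their product exhaustively. The point-selection rule here is an assumption, not the paper's exact criterion.

```python
import numpy as np
from itertools import product

def approx_ml_detect(H, y, constellation, k=2):
    """Search a reduced neighbourhood instead of the full constellation.

    Keeps only the k constellation points nearest to the zero-forcing
    estimate on each transmit antenna and exhaustively searches their
    Cartesian product, trading performance for complexity via k.
    """
    s_zf = np.linalg.pinv(H) @ y                    # zero-forcing estimate
    cands = [sorted(constellation, key=lambda c: abs(c - s))[:k] for s in s_zf]
    best, best_metric = None, np.inf
    for s in product(*cands):                       # reduced search space
        metric = np.linalg.norm(y - H @ np.array(s)) ** 2
        if metric < best_metric:
            best, best_metric = np.array(s), metric
    return best

# 2x2 example with QPSK symbols.
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
H = np.array([[0.9, 0.2], [0.1, 1.1]], dtype=complex)
tx = np.array([1 + 1j, -1 - 1j])
y = H @ tx + 0.05 * (np.random.randn(2) + 1j * np.random.randn(2))
print(approx_ml_detect(H, y, qpsk))
```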
Maximum likelihood characterization of rotationally symmetric distributions on the sphere
Duerinckx, Mitia; Ley, Christophe
2012-01-01
A classical characterization result, which can be traced back to Gauss, states that the maximum likelihood estimator (MLE) of the location parameter equals the sample mean for any possible univariate samples of any possible sizes n if and only if the samples are drawn from a Gaussian population. A similar result, in the two-dimensional case, is given in von Mises (1918) for the Fisher-von Mises-Langevin (FVML) distribution, the equivalent of the Gaussian law on the unit circle. Half a century...
Maximum kinetic energy considerations in proton stereotactic radiosurgery.
Sengbusch, Evan R; Mackie, Thomas R
2011-04-12
The purpose of this study was to determine the maximum proton kinetic energy required to treat a given percentage of patients eligible for stereotactic radiosurgery (SRS) with coplanar arc-based proton therapy, contingent upon the number and location of gantry angles used. Treatment plans from 100 consecutive patients treated with SRS at the University of Wisconsin Carbone Cancer Center between June of 2007 and March of 2010 were analyzed. For each target volume within each patient, in-house software was used to place proton pencil beam spots over the distal surface of the target volume from 51 equally-spaced gantry angles of up to 360°. For each beam spot, the radiological path length from the surface of the patient to the distal boundary of the target was then calculated along a ray from the gantry location to the location of the beam spot. This data was used to generate a maximum proton energy requirement for each patient as a function of the arc length that would be spanned by the gantry angles used in a given treatment. If only a single treatment angle is required, 100% of the patients included in the study could be treated by a proton beam with a maximum kinetic energy of 118 MeV. As the length of the treatment arc is increased to 90°, 180°, 270°, and 360°, the maximum energy requirement increases to 127, 145, 156, and 179 MeV, respectively. A very high percentage of SRS patients could be treated at relatively low proton energies if the gantry angles used in the treatment plan do not span a large treatment arc. Maximum proton kinetic energy requirements increase linearly with size of the treatment arc.
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
Acoustic space dimensionality selection and combination using the maximum entropy principle
Abdel-Haleem, Yasser H.; Renals, Steve; Lawrence, Neil D.
2004-01-01
In this paper we propose a discriminative approach to acoustic space dimensionality selection based on maximum entropy modelling. We form a set of constraints by composing the acoustic space with the space of phone classes, and use a continuous feature formulation of maximum entropy modelling to select an optimal feature set. The suggested approach has two steps: (1) the selection of the best acoustic space that efficiently and economically represents the acoustic data and its variability;...
A New Detection Approach Based on the Maximum Entropy Model
DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua
2006-01-01
The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data presentation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic(ROC) curve analysis approach was utilized to analyze the experimental results. The analysis results show that the proposed approach is comparable to those based on support vector machine(SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
Guillemot, Sylvain
2008-01-01
Given a set of leaf-labeled trees with identical leaf sets, the well-known "Maximum Agreement SubTree" problem (MAST) consists of finding a subtree homeomorphically included in all input trees and with the largest number of leaves. Its variant called "Maximum Compatible Tree" (MCT) is less stringent, as it allows the input trees to be refined. Both problems are of particular interest in computational biology, where the trees encountered often have small degrees. In this paper, we study the parameterized complexity of MAST and MCT with respect to the maximum degree, denoted by D, of the input trees. It is known that MAST is polynomial for bounded D. As a counterpart, we show that the problem is W[1]-hard with respect to parameter D. Moreover, relying on recent advances in parameterized complexity, we obtain a tight lower bound: while MAST can be solved in O(N^{O(D)}) time, where N denotes the input length, we show that an O(N^{o(D)}) bound is not achievable unless SNP is contained in SE. We also show that MCT is W[1...
S. Santana Porbén
2007-02-01
definition of the size and composition of a hospital NSG are presented in this article, along with the responsibilities, functions and tasks to be assumed by its members, and a timetable for its implementation, always from the experiences of the authors after conducting a NSG in a tertiary-care hospital in Havana (Cuba).
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g. [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background such as the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Maximum, minimum, and optimal mutation rates in dynamic environments
Ancliff, Mark; Park, Jeong-Man
2009-12-01
We analyze the dynamics of the parallel mutation-selection quasispecies model with a changing environment. For an environment with the sharp-peak fitness function in which the most fit sequence changes by k spin flips every period T , we find analytical expressions for the minimum and maximum mutation rates for which a quasispecies can survive, valid in the limit of large sequence size. We find an asymptotic solution in which the quasispecies population changes periodically according to the periodic environmental change. In this state we compute the mutation rate that gives the optimal mean fitness over a period. We find that the optimal mutation rate per genome, k/T , is independent of genome size, a relationship which is observed across broad groups of real organisms.
Falzone, E; Pasquier, P; Hoffmann, C; Barbier, O; Boutonnet, M; Salvadori, A; Jarrassier, A; Renner, J; Malgras, B; Mérat, S
2017-02-01
Triage, a medical term derived from the French word "trier", is the practical process of sorting casualties to rationally allocate limited resources. In combat settings with limited medical resources and long transportation times, triage is challenging since the objectives are to avoid overcrowding medical treatment facilities while saving a maximum of soldiers and to get as many of them back into action as possible. The new face of modern warfare, asymmetric and non-conventional, has led to the integrative evolution of triage into the theatre of operations. This article defines different triage scores and algorithms currently implemented in military settings. The discrepancies associated with these military triage systems are highlighted. The assessment of combat casualty severity requires several scores and each nation adopts different systems for triage on the battlefield with the same aim of quickly identifying those combat casualties requiring lifesaving and damage control resuscitation procedures. Other areas of interest for triage in military settings are discussed, including predicting the need for massive transfusion, haemodynamic parameters and ultrasound exploration.
Robust stochastic maximum principle: Complete proof and discussions
Poznyak Alex S.
2002-01-01
Full Text Available This paper develops a version of the Robust Stochastic Maximum Principle (RSMP) applied to the Minimax Mayer Problem formulated for stochastic differential equations with a control-dependent diffusion term. Parametric families of first- and second-order adjoint stochastic processes are introduced to construct the corresponding Hamiltonian formalism. The Hamiltonian function used for the construction of the robust optimal control is shown to be equal to the Lebesgue integral, over a parametric set, of the standard stochastic Hamiltonians corresponding to a fixed value of the uncertain parameter. The paper deals with a cost function given at a finite horizon and containing the mathematical expectation of a terminal term. A terminal condition, given by a vector function, is also considered. Optimal control strategies, adapted to the available information, are constructed for a wide class of uncertain systems given by a stochastic differential equation with unknown parameters from a given compact set. This problem belongs to the class of minimax stochastic optimization problems. The proof is based on recent results obtained for the Minimax Mayer Problem with a finite uncertainty set [14,43-45], as well as on the variational results of [53] derived for the Stochastic Maximum Principle for nonlinear stochastic systems under complete information. A discussion of the obtained results concludes the study.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
An improved maximum power point tracking method for a photovoltaic system
Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes
2016-06-01
In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for a photovoltaic (PV) system is proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. An algorithm was secondly proposed to address the wrong decision that may be made at an abrupt change of the irradiation. The proposed auto-scaling variable step-size approach was compared to various other approaches from the literature, such as the classical fixed step-size, variable step-size and a recent auto-scaling variable step-size MPPT approach. The simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
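A minimal sketch of the auto-scaling step-size idea: the duty-cycle perturbation of a perturb-and-observe loop is scaled by |dP/dV| so it shrinks near the maximum power point. The paper's specific scaling function and irradiation-change guard are not reproduced, and the sign convention assumes that increasing the duty cycle raises the PV operating voltage.

```python
def mppt_step(v, i, v_prev, p_prev, duty, scale=0.05):
    """One iteration of a variable step-size perturb-and-observe tracker.

    The duty-cycle perturbation is scaled by |dP/dV|, so steps are large far
    from the maximum power point and shrink near it. Assumes a topology in
    which increasing the duty cycle raises the PV operating voltage.
    """
    p = v * i                                   # present PV power
    dp, dv = p - p_prev, v - v_prev
    if dv != 0.0:
        step = scale * abs(dp / dv)             # auto-scaled step size
        duty += step if dp / dv > 0 else -step  # climb the P-V curve
    return min(max(duty, 0.0), 1.0), v, p       # clamp duty cycle to [0, 1]

# Inside the converter control loop (v_meas, i_meas are measurements):
# duty, v_prev, p_prev = mppt_step(v_meas, i_meas, v_prev, p_prev, duty)
```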
Ullas Thomas
2015-07-01
Full Text Available This paper presents certain properties of set-magic graphs and obtains the set-magic number of certain classes of graphs. All spanning supergraphs of a set-magic graph are set-magic, and all cycles and Hamiltonian graphs are set-magic. Also, the set-magic number of any cycle of size 2n is always greater than n.
R Wave Extraction Based on the Maximum First Derivative plus the Maximum Value of the Double Search
Wen-po Yao; Wen-li Yao; Min Wu; Tie-bing Liu
2016-01-01
R-wave detection is the main approach for heart rate variability analysis and clinical applications based on the R-R interval. The maximum first derivative plus the maximum value of the double search algorithm is applied to electrocardiograms (ECG) from the MIT-BIH Arrhythmia Database to extract the R wave. Through a study of the algorithm's characteristics and the R-wave detection method, the data segmentation method is modified to improve detection accuracy. After this modification, the average accuracy rate on 6 sets of short ECG data increases from 82.51% to 93.70%, and the average accuracy rate on 11 groups of long-range data is 96.61%. The test results prove that the algorithm and segmentation method can accurately locate the R wave and have good effectiveness and versatility, but some beats may remain undetected due to the algorithm's implementation.
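A rough sketch of the double search on a digitized ECG: find the steepest upslope (maximum first derivative) in a window, then take the amplitude maximum just after it. Window length, threshold, and refractory handling below are illustrative choices, not the paper's segmentation scheme.

```python
import numpy as np

def detect_r_peaks(ecg, fs, win_s=0.25):
    """Double search: maximum first derivative, then maximum amplitude.

    Scans the signal window by window; a window is accepted as containing
    a QRS complex if its steepest upslope exceeds a crude global threshold.
    """
    diff = np.diff(ecg)
    win = int(win_s * fs)
    thresh = 0.6 * np.max(diff)                 # illustrative slope threshold
    peaks, i = [], 0
    while i < len(diff) - win:
        j = i + int(np.argmax(diff[i:i + win]))     # max first derivative
        if diff[j] > thresh:
            k = j + int(np.argmax(ecg[j:j + win]))  # max value after upslope
            peaks.append(k)
            i = k + win                             # skip refractory period
        else:
            i += win
    return peaks

# Usage on an MIT-BIH record sampled at 360 Hz:
# r_locations = detect_r_peaks(signal, fs=360)
```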
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving the satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. Proposed theory and analysis is validated through experimental investigations.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\,\mathrm{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl}\,y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged interclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Alexandre ten Caten
2013-04-01
Full Text Available Digital information generates the possibility of a high degree of redundancy in the data available for fitting predictive models used for Digital Soil Mapping (DSM). Among these models, the Decision Tree (DT) technique has been increasingly applied due to its capacity for dealing with large datasets. The purpose of this study was to evaluate the impact of the data volume used to generate the DT models on the quality of soil maps. An area of 889.33 km² was chosen in the Northern region of the State of Rio Grande do Sul. The soil-landscape relationship was obtained from reambulation of the studied area and the alignment of the units in the 1:50,000 scale topographic mapping. Six predictive covariates linked to the soil-formation factors relief and organisms, together with data sets of 1, 3, 5, 10, 15, 20 and 25% of the total data volume, were used to generate the predictive DT models in the data mining program Waikato Environment for Knowledge Analysis (WEKA). In this study, sample densities below 5% resulted in models with lower power to capture the complexity of the spatial distribution of the soil in the study area. The relation between the data volume to be handled and the predictive capacity of the models was best for samples between 5 and 15%. For the models based on these sample densities, the collected field data indicated an accuracy of predictive mapping close to 70%.
Bayesian and maximum likelihood estimation of genetic maps
York, Thomas L.; Durrett, Richard T.; Tanksley, Steven;
2005-01-01
There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods that makes the Bayesian method applicable to large data sets. We present an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances
A simple approach for maximum heat recovery calculations
Jezowski, J. (Wroclaw Technical Univ. (PL). Inst. of Chemical Engineering and Heating Equipment); Friedler, F. (Hungarian Academy of Sciences, Egyetem (HU). Research Inst. for Technical Chemistry)
1992-04-01
This paper addresses the problem of calculating the maximum heat energy recovery for a given set of process streams. Simple, straightforward algorithms of calculations are presented that account for tasks with multiple utilities, forbidden matches and nonpoint utilities. A new way of applying the so-called dual-stream approach to reduce utility usage for tasks with forbidden matches is also given in this paper. The calculation methods do not require computer programs and mathematical programming application. They give the user a proper insight into a problem to understand heat integration as well as to recognize options and traps in heat exchanger network synthesis. (author).
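The paper's hand calculations build on temperature-interval heat cascading; as a point of reference, the classical problem-table cascade for the single-hot/single-cold-utility case can be sketched as follows. The stream data and dTmin below are hypothetical, and the paper's extensions (multiple utilities, forbidden matches, nonpoint utilities) are not reproduced.

```python
# Minimal problem-table cascade for maximum heat recovery targeting.
# Textbook single-utility case only; stream data and dTmin are made up.

def heat_recovery_targets(hot, cold, dtmin):
    """hot/cold: lists of (T_supply, T_target, CP) with CP = m*cp [kW/K]."""
    # Shift hot streams down and cold streams up by dTmin/2.
    shifted = [(ts - dtmin/2, tt - dtmin/2, cp) for ts, tt, cp in hot] \
            + [(ts + dtmin/2, tt + dtmin/2, cp) for ts, tt, cp in cold]
    temps = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    cascade, heat = [0.0], 0.0
    for hi, lo in zip(temps, temps[1:]):
        # Net CP in this interval: hot streams release, cold streams absorb.
        net = 0.0
        for ts, tt, cp in shifted:
            top, bot = max(ts, tt), min(ts, tt)
            if top >= hi and bot <= lo:
                net += cp if ts > tt else -cp   # hot streams cool down
        heat += net * (hi - lo)
        cascade.append(heat)
    qh_min = max(0.0, -min(cascade))     # minimum hot utility
    qc_min = cascade[-1] + qh_min        # minimum cold utility
    return qh_min, qc_min

hot = [(250, 40, 0.15), (200, 80, 0.25)]     # hypothetical streams
cold = [(20, 180, 0.20), (140, 230, 0.30)]
print(heat_recovery_targets(hot, cold, dtmin=10))
```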
A maximum feasible subset algorithm with application to radiation therapy
Sadegh, Payman
1999-01-01
Consider a set of linear one-sided or two-sided inequality constraints on a real vector X. The problem of interest is the selection of X so as to maximize the number of constraints that are simultaneously satisfied, or equivalently, the combinatorial selection of a maximum-cardinality subset of feasible inequalities. Special classes of this problem are of interest in a variety of areas such as pattern recognition, machine learning, operations research, and medical treatment planning. This problem is generally solvable in exponential time. A heuristic polynomial time algorithm is presented in this paper...
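One common polynomial-time heuristic for this maximum feasible subset problem (not necessarily the one presented in the paper) is elastic programming: minimize the total slack needed to make all constraints hold, then repeatedly delete the constraint demanding the most slack. A minimal sketch with SciPy, on a hypothetical instance:

```python
import numpy as np
from scipy.optimize import linprog

def max_fs_greedy(A, b, tol=1e-8):
    """Greedy deletion heuristic for MAX-FS on A x <= b.
    Repeatedly minimizes total elastic slack and drops the worst
    constraint until the remainder is feasible. Returns kept indices."""
    m, n = A.shape
    keep = list(range(m))
    while keep:
        Ak, bk = A[keep], b[keep]
        k = len(keep)
        # Variables [x (n, free); s (k, >= 0)]: minimize sum(s)
        # subject to Ak x - s <= bk.
        c = np.r_[np.zeros(n), np.ones(k)]
        A_ub = np.hstack([Ak, -np.eye(k)])
        bounds = [(None, None)] * n + [(0, None)] * k
        res = linprog(c, A_ub=A_ub, b_ub=bk, bounds=bounds)
        s = res.x[n:]
        if s.max() <= tol:           # all remaining constraints satisfied
            return keep, res.x[:n]
        keep.pop(int(np.argmax(s)))  # drop the most-violated constraint
    return [], None

# hypothetical instance: 3 of these 4 half-planes admit a common point
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [1., 1.]])
b = np.array([1., -2., 1., 0.])     # x <= 1 and x >= 2 conflict
print(max_fs_greedy(A, b)[0])
```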
Application of the maximum entropy method to profile analysis
Armstrong, N.; Kalceff, W. [University of Technology, Department of Applied Physics, Sydney, NSW (Australia); Cline, J.P. [National Institute of Standards and Technology, Gaithersburg, (United States)
1999-12-01
Full text: A maximum entropy (MaxEnt) method for analysing crystallite size- and strain-induced x-ray profile broadening is presented. This method treats the problems of determining the specimen profile, crystallite size distribution, and strain distribution in a general way by considering them as inverse problems. A common difficulty faced by many experimenters is their inability to determine a well-conditioned solution of the integral equation, which preserves the positivity of the profile or distribution. We show that the MaxEnt method overcomes this problem, while also enabling a priori information, in the form of a model, to be introduced into it. Additionally, we demonstrate that the method is fully quantitative, in that uncertainties in the solution profile or solution distribution can be determined and used in subsequent calculations, including mean particle sizes and rms strain. An outline of the MaxEnt method is presented for the specific problems of determining the specimen profile and crystallite or strain distributions for the correspondingly broadened profiles. This approach offers an alternative to standard methods such as those of Williamson-Hall and Warren-Averbach. An application of the MaxEnt method is demonstrated in the analysis of alumina size-broadened diffraction data (from NIST, Gaithersburg). It is used to determine the specimen profile and column-length distribution of the scattering domains. Finally, these results are compared with the corresponding Williamson-Hall and Warren-Averbach analyses. Copyright (1999) Australian X-ray Analytical Association Inc.
Distribution of phytoplankton groups within the deep chlorophyll maximum
Latasa, Mikel
2016-11-01
The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Marañón', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit for the maximum resolution achievable with a high resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus cannot be overcome with any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a factor limiting the ability to yield high-resolution images in tomographs with small crystal sizes. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners.
Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Yan, Xiaozhen; Xie, Wu; Xu, Zhen
2016-12-01
Band selection is considered to be an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (the neighborhood dependency measure based algorithm, genetic algorithm and uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvement in band selection and classification accuracy.
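A minimal sketch of the MRMR forward greedy search in its difference form, using a plain histogram estimate of mutual information in place of the paper's neighborhood mutual information; the toy data and bin count are assumptions:

```python
import numpy as np

def mutual_info(a, b, bins=16):
    """Plain histogram mutual information (stand-in for the paper's
    neighborhood mutual information)."""
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mrmr_select(X, y, k):
    """Forward greedy MRMR (difference form): at each step pick the band
    maximizing relevance MI(band; y) minus mean redundancy with the
    already selected bands."""
    n_bands = X.shape[1]
    relevance = [mutual_info(X[:, j], y) for j in range(n_bands)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_bands):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# hypothetical toy data: 200 samples x 30 bands
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30)); y = (X[:, 3] + X[:, 17] > 0).astype(float)
print(mrmr_select(X, y, k=5))
```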
Exact and approximation algorithms for DNA tag set design.
Măndoiu, Ion I; Trincă, Dragoş
2006-04-01
In this paper, we propose new solution methods for designing tag sets for use in universal DNA arrays. First, we give integer linear programming formulations for two previous formalizations of the tag set design problem. We show that these formulations can be solved to optimality for problem instances of moderate size by using general purpose optimization packages and also give more scalable algorithms based on an approximation scheme for packing linear programs. Second, we note the benefits of periodic tags and establish an interesting connection between the tag design problem and the problem of packing the maximum number of vertex-disjoint directed cycles in a given graph. We show that combining a simple greedy cycle packing algorithm with a previously proposed alphabetic tree search strategy yields an increase of over 40% in the number of tags compared to previous methods.
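The greedy cycle-packing step mentioned in the abstract can be sketched with networkx: repeatedly find any directed cycle and delete its vertices. The tag-specific graph construction is omitted and the toy graph is hypothetical:

```python
import networkx as nx

def greedy_cycle_packing(G):
    """Greedy vertex-disjoint directed cycle packing: repeatedly take any
    cycle found in the remaining graph and delete its vertices. (A simple
    stand-in consistent with the 'simple greedy cycle packing' mentioned
    in the abstract; tag-specific details are not modeled.)"""
    H = G.copy()
    cycles = []
    while True:
        try:
            edges = nx.find_cycle(H)
        except nx.NetworkXNoCycle:
            return cycles
        nodes = [u for u, _ in edges]
        cycles.append(nodes)
        H.remove_nodes_from(nodes)

# hypothetical toy graph: two vertex-disjoint triangles plus a chord
G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)])
print(greedy_cycle_packing(G))   # e.g. [[0, 1, 2], [3, 4, 5]]
```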
Hawton, Keith
2011-06-10
Abstract Background In order to reduce fatal self-poisoning legislation was introduced in the UK in 1998 to restrict pack sizes of paracetamol sold in pharmacies (maximum 32 tablets) and non-pharmacy outlets (maximum 16 tablets), and in Ireland in 2001, but with smaller maximum pack sizes (24 and 12 tablets). Our aim was to determine whether this resulted in smaller overdoses of paracetamol in Ireland compared with the UK. Methods We used data on general hospital presentations for non-fatal self-harm for 2002 - 2007 from the Multicentre Study of Self-harm in England (six hospitals), and from the National Registry of Deliberate Self-harm in Ireland. We compared sizes of overdoses of paracetamol in the two settings. Results There were clear peaks in numbers of non-fatal overdoses, associated with maximum pack sizes of paracetamol in pharmacy and non-pharmacy outlets in both England and Ireland. Significantly more pack equivalents (based on maximum non-pharmacy pack sizes) were used in overdoses in Ireland (mean 2.63, 95% CI 2.57-2.69) compared with England (2.07, 95% CI 2.03-2.10). The overall size of overdoses did not differ significantly between England (median 22, interquartile range (IQR) 15-32) and Ireland (median 24, IQR 12-36). Conclusions The difference in paracetamol pack size legislation between England and Ireland does not appear to have resulted in a major difference in sizes of overdoses. This is because more pack equivalents are taken in overdoses in Ireland, possibly reflecting differing enforcement of sales advice. Differences in access to clinical services may also be relevant.
Waters Keith
2011-06-01
Full Text Available Abstract Background In order to reduce fatal self-poisoning legislation was introduced in the UK in 1998 to restrict pack sizes of paracetamol sold in pharmacies (maximum 32 tablets) and non-pharmacy outlets (maximum 16 tablets), and in Ireland in 2001, but with smaller maximum pack sizes (24 and 12 tablets). Our aim was to determine whether this resulted in smaller overdoses of paracetamol in Ireland compared with the UK. Methods We used data on general hospital presentations for non-fatal self-harm for 2002 - 2007 from the Multicentre Study of Self-harm in England (six hospitals), and from the National Registry of Deliberate Self-harm in Ireland. We compared sizes of overdoses of paracetamol in the two settings. Results There were clear peaks in numbers of non-fatal overdoses, associated with maximum pack sizes of paracetamol in pharmacy and non-pharmacy outlets in both England and Ireland. Significantly more pack equivalents (based on maximum non-pharmacy pack sizes) were used in overdoses in Ireland (mean 2.63, 95% CI 2.57-2.69) compared with England (2.07, 95% CI 2.03-2.10). The overall size of overdoses did not differ significantly between England (median 22, interquartile range (IQR) 15-32) and Ireland (median 24, IQR 12-36). Conclusions The difference in paracetamol pack size legislation between England and Ireland does not appear to have resulted in a major difference in sizes of overdoses. This is because more pack equivalents are taken in overdoses in Ireland, possibly reflecting differing enforcement of sales advice. Differences in access to clinical services may also be relevant.
Adaptive edge image enhancement based on maximum fuzzy entropy
ZHANG Xiu-hua; YANG Kun-tao
2006-01-01
Based on the maximum fuzzy entropy principle, an edge image with low contrast is adaptively and optimally classified into two classes, under the conditions of probability partition and fuzzy partition. The optimal threshold is used as the classification threshold, and a local parametric gray-level transformation is applied to the resulting classes. By means of a two-parameter representation, the homogeneity of the regions in the edge image is improved. Simulations on a set of test images show that the proposed technique performs well in terms of homogeneity and that the extracted and enhanced edges provide an efficient edge representation of images.
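As a simple crisp relative of the maximum fuzzy entropy criterion (the fuzzy membership machinery of the paper is not reproduced here), Kapur-style maximum-entropy thresholding picks the threshold that maximizes the summed Shannon entropies of the two gray-level classes:

```python
import numpy as np

def kapur_threshold(hist):
    """Crisp maximum-entropy (Kapur) thresholding, shown as a simple
    relative of the paper's maximum *fuzzy* entropy criterion: pick the
    threshold maximizing the summed Shannon entropies of both classes."""
    p = hist / hist.sum()
    c = p.cumsum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p) - 1):
        w0, w1 = c[t], 1.0 - c[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t + 1] / w0, p[t + 1:] / w1
        h = -(p0[p0 > 0] * np.log(p0[p0 > 0])).sum() \
            - (p1[p1 > 0] * np.log(p1[p1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# hypothetical bimodal gray-level histogram (low-contrast edge image)
rng = np.random.default_rng(4)
levels = np.r_[rng.normal(90, 8, 4000), rng.normal(140, 10, 2000)]
hist, _ = np.histogram(levels.clip(0, 255), bins=256, range=(0, 256))
print(kapur_threshold(hist))   # should land between the two modes
```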
Gaussian maximum likelihood and contextual classification algorithms for multicrop classification
Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.
1987-01-01
The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on one artificially created image and one MSS data set are reported.
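The normalization step mentioned here (turning per-class Gaussian log-likelihoods into exhaustive, normalized membership probabilities for relaxation) is a stable log-sum-exp softmax; a minimal generic sketch with hypothetical class parameters, not the paper's implementation:

```python
import numpy as np

def class_posteriors(x, means, covs, priors):
    """Normalized class-membership probabilities from Gaussian ML
    log-likelihoods via the log-sum-exp trick."""
    logp = []
    for mu, S, pi in zip(means, covs, priors):
        d = x - mu
        _, logdet = np.linalg.slogdet(S)
        ll = -0.5 * (d @ np.linalg.solve(S, d) + logdet
                     + len(x) * np.log(2 * np.pi))
        logp.append(ll + np.log(pi))
    logp = np.array(logp)
    w = np.exp(logp - logp.max())       # subtract max for stability
    return w / w.sum()

# hypothetical 2-band pixel and two spectral classes
means = [np.array([50., 80.]), np.array([70., 60.])]
covs = [np.eye(2) * 25.0, np.eye(2) * 36.0]
print(class_posteriors(np.array([55., 75.]), means, covs, [0.5, 0.5]))
```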
Fuzzy Mathematics for Raw Silk Size Control
HU Zheng-yu; YU Hai-feng; GU Ping
2008-01-01
Based on photographs and experiments, this paper divides the cocoon layers into three categories according to their colors, establishes a three-color membership function based on fuzzy mathematics, and constructs fuzzy sets that satisfy the range of size control by using the ordinary set and the attached frequency of the three-color cocoon combination; the ordinary sets for the range of size control are then obtained by choosing a λ-cut. Under these ordinary sets, each end is given a relative level by duality, a relation matrix and an overall ordering are set up, and the membership function is found to judge whether the size control is normal.
On sets without tangents and exterior sets of a conic
Van de Voorde, Geertrui
2012-01-01
A set without tangents in $\\PG(2,q)$ is a set of points S such that no line meets S in exactly one point. An exterior set of a conic $\\mathcal{C}$ is a set of points $\\E$ such that all secant lines of $\\E$ are external lines of $\\mathcal{C}$. In this paper, we first recall some known examples of sets without tangents and describe them in terms of determined directions of an affine pointset. We show that the smallest sets without tangents in $\\PG(2,5)$ are (up to projective equivalence) of two different types. We generalise the non-trivial type by giving an explicit construction of a set without tangents in $\\PG(2,q)$, $q=p^h$, $p>2$ prime, of size $q(q-1)/2-r(q+1)/2$, for all $0\\leq r\\leq (q-5)/2$. After that, a different description of the same set in $\\PG(2,5)$, using exterior sets of a conic, is given and we investigate in which ways a set of exterior points on an external line $L$ of a conic in $\\PG(2,q)$ can be extended with an extra point $Q$ to a larger exterior set of $\\mathcal{C}$. It turns out that ...
Improved Minimum Cuts and Maximum Flows in Undirected Planar Graphs
Italiano, Giuseppe F
2010-01-01
In this paper we study minimum cut and maximum flow problems on planar graphs, both in static and in dynamic settings. First, we present an algorithm that given an undirected planar graph computes the minimum cut between any two given vertices in O(n log log n) time. Second, we show how to achieve the same O(n log log n) bound for the problem of computing maximum flows in undirected planar graphs. To the best of our knowledge, these are the first algorithms for those two problems that break the O(n log n) barrier, which has been standing for more than 25 years. Third, we present a fully dynamic algorithm that is able to maintain information about minimum cuts and maximum flows in a plane graph (i.e., a planar graph with a fixed embedding): our algorithm is able to insert edges, delete edges and answer min-cut and max-flow queries between any pair of vertices in O(n^(2/3) log^3 n) time per operation. This result is based on a new dynamic shortest path algorithm for planar graphs which may be of independent int...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes from this class of maximum-entropy distributions when the constraints are purely kinematic.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
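The constrained maximization described here is short enough to write out; in the notation below (chosen for this sketch), maximizing the Shannon entropy subject only to normalization and a fixed mean logarithm yields a pure power law:

```latex
% Maximize S = -\sum_k p_k \ln p_k subject to \sum_k p_k = 1 and
% \sum_k p_k \ln k = \chi (the single constraint named in the abstract).
\begin{align}
  \mathcal{L} &= -\sum_k p_k \ln p_k
      - \lambda \Big( \sum_k p_k - 1 \Big)
      - \alpha \Big( \sum_k p_k \ln k - \chi \Big), \\
  0 &= \frac{\partial \mathcal{L}}{\partial p_k}
     = -\ln p_k - 1 - \lambda - \alpha \ln k
  \;\Longrightarrow\;
  p_k = \frac{k^{-\alpha}}{Z(\alpha)}, \qquad
  Z(\alpha) = \sum_k k^{-\alpha}.
\end{align}
% alpha is fixed by <ln k> = chi; alpha near 1 corresponds to Zipf's law.
```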
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô–Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\\lambda_1(G),\\lambda_2(G),...,\\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\\sum_{i=1}^{n}e^{\\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
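Since the adjacency matrix is symmetric, the Estrada index is a one-liner on its eigenvalues; a small sketch with a hypothetical bicyclic graph:

```python
import numpy as np
import networkx as nx

def estrada_index(G):
    """EE(G) = sum_i exp(lambda_i), lambda_i the adjacency eigenvalues
    (the matrix is symmetric, so eigvalsh applies)."""
    A = nx.to_numpy_array(G)
    return float(np.exp(np.linalg.eigvalsh(A)).sum())

# a small bicyclic example: two triangles sharing a vertex (hypothetical)
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
print(estrada_index(G))
# networkx also ships nx.estrada_index(G) for cross-checking
```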
罗桑; 钱振东; 薛永超
2015-01-01
Open-graded friction course (OGFC) is applied to pavement surfaces to increase driving safety under wet conditions, and recently, to reduce tire/pavement noise. The durability of OGFC, however, has been a concern since conventional OGFC mixes last typically less than ten years before major maintenance or rehabilitation is needed. This work investigates a new open-graded asphalt mixture that uses epoxy asphalt as binder to improve mix durability. One type of epoxy asphalt that has been successfully applied to dense-graded asphalt concrete for bridge deck paving was selected. A procedure of compacting the mix into slab specimens was developed and a series of laboratory tests were conducted to evaluate the performance of the new mix, including Cantabro loss, permeability, friction, shear strength, and wheel rutting tests. Results show superior overall performance of the open-graded epoxy asphalt mix compared to conventional open-graded asphalt mix. There are also preliminary indications that the OGFC mix with 4.75-mm NMAS gradation can improve the resistance performance to raveling, while the OGFC mix with 9.5-mm NMAS gradation can improve the performance of surface friction at a high slip speed.
Extremal sizes of subspace partitions
Heden, Olof; Nastase, Esmeralda; Sissokho, Papa
2011-01-01
A subspace partition $\\Pi$ of $V=V(n,q)$ is a collection of subspaces of $V$ such that each 1-dimensional subspace of $V$ is in exactly one subspace of $\\Pi$. The size of $\\Pi$ is the number of its subspaces. Let $\\sigma_q(n,t)$ denote the minimum size of a subspace partition of $V$ in which the largest subspace has dimension $t$, and let $\\rho_q(n,t)$ denote the maximum size of a subspace partition of $V$ in which the smallest subspace has dimension $t$. In this paper, we determine the values of $\\sigma_q(n,t)$ and $\\rho_q(n,t)$ for all positive integers $n$ and $t$. Furthermore, we prove that if $n\\geq 2t$, then the minimum size of a maximal partial $t$-spread in $V(n+t-1,q)$ is $\\sigma_q(n,t)$.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar–Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max − X_0 = (0.59 ± 0.02)·Y_X/P·C.
Metabolic networks evolve towards states of maximum entropy production.
Unrean, Pornkamol; Srienc, Friedrich
2011-11-01
A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such state we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such reduced metabolic network metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state when the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles.
Cell size, genome size and the dominance of Angiosperms
Simonin, K. A.; Roddy, A. B.
2016-12-01
Angiosperms are capable of maintaining the highest rates of photosynthetic gas exchange of all land plants. High rates of photosynthesis depend mechanistically both on efficiently transporting water to the sites of evaporation in the leaf and on regulating the loss of that water to the atmosphere as CO2 diffuses into the leaf. Angiosperm leaves are unique in their ability to sustain high fluxes of liquid- and vapor-phase water transport due to high vein densities and numerous, small stomata. Despite the ubiquity of studies characterizing the anatomical and physiological adaptations that enable angiosperms to maintain high rates of photosynthesis, the underlying mechanism explaining why they have been able to develop such high leaf vein densities, and such small and abundant stomata, remains incompletely understood. Here we ask whether the scaling of genome size and cell size places a fundamental constraint on the photosynthetic metabolism of land plants, and whether genome downsizing among the angiosperms directly contributed to their greater potential and realized primary productivity relative to the other major groups of terrestrial plants. Using previously published data we show that a single relationship can predict guard cell size from genome size across the major groups of terrestrial land plants (e.g. angiosperms, conifers, cycads and ferns). Similarly, a strong positive correlation exists between genome size and both stomatal density and vein density, which together ultimately constrain maximum potential (gs,max) and operational (gs,op) stomatal conductance. Further, the difference in the slopes describing the covariation between genome size and both gs,max and gs,op suggests that genome downsizing brings gs,op closer to gs,max. Taken together, the data presented here suggest that the smaller genomes of angiosperms allow their final cell sizes to vary more widely and respond more directly to environmental conditions and in doing so bring operational photosynthetic
Set Size Effects in the Macaque Striate Cortex.
Landman, R.; Spekreijse, H.; Lamme, V.A.F.
2003-01-01
Attentive processing is often described as a competition for resources among stimuli by mutual suppression. This is supported by findings that activity in extrastriate cortex is suppressed when several stimuli are presented simultaneously, compared to a single stimulus. In this study, we randomly va
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
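Stripped of the patent's phase-estimator/correlator structure, the underlying MAP decision rule is simply "maximize log-likelihood plus log-prior over the hypothesized signals"; a bare-bones sketch under white Gaussian noise, with hypothetical code words (the random-phase modeling of the invention is not reproduced):

```python
import numpy as np

def map_decode(x, signals, priors, noise_var):
    """Generic MAP decision rule: pick the hypothesized signal maximizing
    log-likelihood + log-prior under white Gaussian noise."""
    stats = [-np.sum((x - s)**2) / (2 * noise_var) + np.log(p)
             for s, p in zip(signals, priors)]
    return int(np.argmax(stats)), stats

# two hypothetical bipolar code words, equal priors
rng = np.random.default_rng(3)
s0 = np.array([1., 1., -1., -1.]); s1 = np.array([1., -1., 1., -1.])
x = s1 + 0.7 * rng.standard_normal(4)
print(map_decode(x, [s0, s1], [0.5, 0.5], noise_var=0.49)[0])  # likely 1
```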
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been widely studied. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where they are expected to be more separable, and then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
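The piecewise-linear idea can be sketched with a modern LP solver in place of Dantzig's bounded-variable simplex: since −x ln x is concave, it equals the lower envelope of its tangent lines, so an auxiliary variable bounded by each tangent turns entropy maximization into an LP. A minimal sketch (segment count, bounds and the toy measurement matrix are assumptions, not the paper's setup):

```python
import numpy as np
from scipy.optimize import linprog

def maxent_lp(A, b, n_seg=20, xmax=1.0):
    """Maximize sum_i f(x_i), f(x) = -x*ln(x), subject to A x = b, x >= 0,
    by bounding f with n_seg tangent lines (f concave => f = min of its
    tangents) and solving the resulting LP."""
    m, n = A.shape
    pts = np.linspace(xmax / n_seg, xmax, n_seg)   # tangent points
    slopes = -(np.log(pts) + 1.0)                  # f'(p) = -(ln p + 1)
    icepts = pts                                   # f(p) - f'(p)*p = p
    # Variables [x (n); t (n)]: maximize sum(t) s.t. t_i <= a_k x_i + c_k.
    c = np.r_[np.zeros(n), -np.ones(n)]
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    for i in range(n):
        for a_k, c_k in zip(slopes, icepts):
            rows += [r, r]; cols += [n + i, i]; vals += [1.0, -a_k]
            rhs.append(c_k); r += 1
    A_ub = np.zeros((r, 2 * n))
    A_ub[rows, cols] = vals
    A_eq = np.hstack([A, np.zeros((m, n))])
    bounds = [(0, xmax)] * n + [(None, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=np.array(rhs),
                  A_eq=A_eq, b_eq=b, bounds=bounds)
    return res.x[:n]

# toy "restoration": recover a distribution from two linear measurements
A = np.array([[1., 1., 1., 1.], [0., 1., 2., 3.]])   # normalization, mean
b = np.array([1.0, 1.5])
# approx. uniform [0.25]*4 here: the mean constraint matches uniform
print(maxent_lp(A, b))
```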
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Full Text Available Ship squat is a combined effect of ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
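For a sense of the magnitudes such formulas produce, one widely quoted "quick" form of Barrass' estimate is sketched below; published versions of the coefficients differ, so treat them as illustrative assumptions rather than the paper's numbers:

```python
def barrass_max_squat(cb, v_knots, confined=False):
    """One widely quoted 'quick' form of Barrass' maximum-squat estimate:
    squat [m] ~ Cb * V^2 / 100 in open water, Cb * V^2 / 50 in a confined
    channel (V in knots). Coefficients vary between editions of the
    formula, so treat these values as illustrative assumptions."""
    return cb * v_knots**2 / (50.0 if confined else 100.0)

# hypothetical cargo ship, block coefficient Cb = 0.80, at canal speeds
for v in (6, 8, 10, 12):
    print(v, "kn ->", round(barrass_max_squat(0.80, v, confined=True), 2), "m")
```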
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
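The model structure described here (a shared fundamental frequency with per-channel amplitudes, phases and noise) implies that per-channel harmonic energies simply add across channels; a harmonic-summation approximation to the ML estimator, on hypothetical two-channel data, is sketched below. This is the model's flavor, not the paper's exact derivation:

```python
import numpy as np

def multichannel_pitch(channels, fs, f0_grid, n_harm=5):
    """Harmonic-summation approximation to multichannel ML pitch
    estimation: channels share f0 but have independent amplitudes,
    phases and noise, so per-channel harmonic energies are summed."""
    scores = np.zeros(len(f0_grid))
    for x in channels:
        n = len(x)
        t = np.arange(n) / fs
        for i, f0 in enumerate(f0_grid):
            for l in range(1, n_harm + 1):
                z = np.exp(-2j * np.pi * l * f0 * t)
                scores[i] += np.abs(z @ x) ** 2 / n   # harmonic energy
    return f0_grid[int(np.argmax(scores))]

# two hypothetical channels, same 220 Hz pitch, different SNR and phase
fs, n = 8000, 2048
t = np.arange(n) / fs
rng = np.random.default_rng(2)
x1 = (np.sin(2*np.pi*220*t) + 0.5*np.sin(2*np.pi*440*t)
      + 0.3*rng.standard_normal(n))
x2 = 0.4*np.cos(2*np.pi*220*t + 1.0) + 1.0*rng.standard_normal(n)
# grid restricted to avoid octave/subharmonic ambiguity in this sketch
print(multichannel_pitch([x1, x2], fs, np.arange(150., 350., 1.0)))  # ~220
```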
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
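The core of the approach, maximizing a Poisson log-likelihood over line parameters instead of least-squares fitting, can be sketched in a few lines; the Gaussian-line-plus-flat-background parametrization below is an assumption, not CORA's exact model:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def fit_line_poisson(counts, model, x0):
    """Fit counts c_j with expectation model(params)_j by maximizing the
    Poisson log-likelihood, in the spirit of the CORA approach."""
    def nll(p):
        lam = np.clip(model(p), 1e-12, None)
        return -(counts * np.log(lam) - lam - gammaln(counts + 1)).sum()
    return minimize(nll, x0, method="Nelder-Mead").x

# hypothetical low-count spectrum: one Gaussian line on a flat background
rng = np.random.default_rng(5)
ch = np.arange(100)
true = 0.5 + 8.0 * np.exp(-0.5 * ((ch - 42) / 2.5) ** 2)
counts = rng.poisson(true).astype(float)
gauss = lambda p: p[3] + p[0] * np.exp(-0.5 * ((ch - p[1]) / p[2]) ** 2)
x0 = np.array([counts.max(), float(np.argmax(counts)), 3.0,
               counts.min() + 0.1])
print(fit_line_poisson(counts, gauss, x0))   # ~[8, 42, 2.5, 0.5]
```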
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Estimating landscape carrying capacity through maximum clique analysis.
Donovan, Therese M; Warrington, Gregory S; Schwenk, W Scott; Dinitz, Jeffrey H
2012-12-01
Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in an 1153-km² study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m² HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be
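The graph construction and clique step translate directly into networkx (used here instead of Cliquer); the toy coordinates and the 1.0-unit coexistence rule are hypothetical, and the exact search is exponential in the worst case, consistent with the computational limits noted above:

```python
import itertools, math
import networkx as nx

# Compatibility graph in the spirit of the abstract: vertices are candidate
# home-range centers, edges join pairs far enough apart to coexist without
# territory overlap; carrying capacity is then the maximum clique size.
pts = [(0, 0), (0.4, 0.2), (1.2, 0), (2.4, 0.1), (2.9, 0.3)]  # hypothetical
G = nx.Graph()
G.add_nodes_from(range(len(pts)))
for i, j in itertools.combinations(range(len(pts)), 2):
    if math.dist(pts[i], pts[j]) >= 1.0:   # far enough apart to coexist
        G.add_edge(i, j)

clique, size = nx.max_weight_clique(G, weight=None)  # exact, exponential
print(size, clique)   # 3 compatible territories on this toy landscape
```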
Kim, K. M.; Smetana, P.
1990-03-01
Growth of large-diameter Czochralski (CZ) silicon crystals requires complete elimination of dislocations by means of the Dash technique, where the seed diameter is reduced to a small size, typically 3 mm, in conjunction with an increase in the pull rate. The maximum length of a large CZ silicon crystal is estimated at the fracture stress limit of the seed neck of diameter d. The maximum lengths for 200 and 300 mm CZ crystals amount to 197 and 87 cm, respectively, with d = 0.3 cm; the estimated maximum weight is 144 kg.
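The neck-limited weight quoted above follows from elementary statics: the neck must carry the whole crystal, so m_max = σ_f·(πd²/4)/g. An assumed fracture stress of about 2 × 10⁸ Pa (not a value given in the abstract) reproduces the 144 kg figure:

```python
import math

sigma_f = 2.0e8    # Pa, assumed fracture stress of the silicon neck
d = 3.0e-3         # m, Dash neck diameter (0.3 cm)
g = 9.81           # m/s^2
# The neck carries the full crystal weight: m_max = sigma_f * A / g.
m_max = sigma_f * math.pi * d**2 / 4 / g
print(round(m_max, 1), "kg")   # ~144 kg, matching the abstract
```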
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to undergo a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T⁻¹(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness defined here and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10⁻³ to 5 × 10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Shape Modelling Using Maximum Autocorrelation Factors
Larsen, Rasmus
2001-01-01
This paper addresses the problems of generating a low dimensional representation of the shape variation present in a training set after alignment using Procrustes analysis and projection into shape tangent space. We will extend the use of principal components analysis in the original formulation of Active Shape Models by Timothy Cootes and Christopher Taylor by building new information into the model. This new information consists of two types of prior knowledge. First, in many situations we will be given an ordering of the shapes of the training set. This situation occurs when the shapes of the training set are in reality a time series, e.g. snapshots of a beating heart during the cardiac cycle, or when the shapes are slices of a 3D structure, e.g. the spinal cord. Second, in almost all applications a natural order of the landmark points along the contour of the shape is introduced...
Type Ibn Supernovae Show Photometric Homogeneity and Spectral Diversity at Maximum Light
Hosseinzadeh, Griffin; Arcavi, Iair; Valenti, Stefano; McCully, Curtis; Howell, D. Andrew; Johansson, Joel; Sollerman, Jesper; Pastorello, Andrea; Benetti, Stefano; Cao, Yi; Cenko, S. Bradley; Clubb, Kelsey I.; Corsi, Alessandra; Duggan, Gina; Elias-Rosa, Nancy; Filippenko, Alexei V.; Fox, Ori D.; Fremling, Christoffer; Horesh, Assaf; Karamehmetoglu, Emir; Kasliwal, Mansi; Marion, G. H.; Ofek, Eran; Sand, David; Taddia, Francesco; Zheng, WeiKang; Fraser, Morgan; Gal-Yam, Avishay; Inserra, Cosimo; Laher, Russ; Masci, Frank; Rebbapragada, Umaa; Smartt, Stephen; Smith, Ken W.; Sullivan, Mark; Surace, Jason; Woźniak, Przemek
2017-02-01
Type Ibn supernovae (SNe) are a small yet intriguing class of explosions whose spectra are characterized by low-velocity helium emission lines with little to no evidence for hydrogen. The prevailing theory has been that these are the core-collapse explosions of very massive stars embedded in helium-rich circumstellar material (CSM). We report optical observations of six new SNe Ibn: PTF11rfh, PTF12ldy, iPTF14aki, iPTF15ul, SN 2015G, and iPTF15akq. This brings the sample size of such objects in the literature to 22. We also report new data, including a near-infrared spectrum, on the Type Ibn SN 2015U. In order to characterize the class as a whole, we analyze the photometric and spectroscopic properties of the full Type Ibn sample. We find that, despite the expectation that CSM interaction would generate a heterogeneous set of light curves, as seen in SNe IIn, most Type Ibn light curves are quite similar in shape, declining at rates around 0.1 mag day⁻¹ during the first month after maximum light, with a few significant exceptions. Early spectra of SNe Ibn come in at least two varieties, one that shows narrow P Cygni lines and another dominated by broader emission lines, both around maximum light, which may be an indication of differences in the state of the progenitor system at the time of explosion. Alternatively, the spectral diversity could arise from viewing-angle effects or merely from a lack of early spectroscopic coverage. Together, the relative light curve homogeneity and narrow spectral features suggest that the CSM consists of a spatially confined shell of helium surrounded by a less dense extended wind.
Efficiency at maximum power of a chemical engine.
Hooyberghs, Hans; Cleuren, Bart; Salazar, Alberto; Indekeu, Joseph O; Van den Broeck, Christian
2013-10-01
A cyclically operating chemical engine is considered that converts chemical energy into mechanical work. The working fluid is a gas of finite-sized spherical particles interacting through elastic hard collisions. For a generic transport law for particle uptake and release, the efficiency at maximum power η_mp takes the form 1/2 + cΔμ + O(Δμ²), with 1/2 a universal constant and Δμ the chemical potential difference between the particle reservoirs. The linear coefficient c is zero for engines featuring a so-called left/right symmetry or particle fluxes that are antisymmetric in the applied chemical potential difference. Remarkably, the leading constant in η_mp is non-universal with respect to an exceptional modification of the transport law. For a nonlinear transport model, we obtain η_mp = 1/(θ + 1), with θ > 0 the power of Δμ in the transport equation.
Maximum solid solubility of transition metals in vanadium solvent
ZHANG Jin-long; FANG Shou-shi; ZHOU Zi-qiang; LIN Gen-wen; GE Jian-sheng; FENG Feng
2005-01-01
Maximum solid solubility (Cmax) of different transition metals in a metal solvent can be described by a semi-empirical equation using a function Zf that contains the electronegativity difference, atomic diameter and electron concentration. The relation between Cmax and these parameters for transition metals in vanadium solvent was studied. It is shown that the relation between Cmax and the function Zf can be expressed as ln Cmax = Zf = 7.3165 - 2.7805(ΔX)² - 71.278δ² - 0.85556 n^(2/3). The atomic-size parameter has the largest effect on the Cmax of the V binary alloy, followed by the electronegativity difference; the electron concentration has the smallest effect among the three bond parameters. The function Zf is used for predicting the unknown Cmax of transition metals in vanadium solvent. The results are compared with the Darken-Gurry theorem, which can be deduced from the function Zf obtained in this work.
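As an illustration of the semi-empirical relation quoted above, the following Python sketch evaluates Zf = ln Cmax from the three bond parameters with the fitted coefficients reported in the abstract. The example input values are invented; the exact definitions and units of ΔX, δ and n are those of the paper and are not specified here.

```python
import math

def ln_cmax(delta_x: float, delta: float, n: float) -> float:
    """Fitted bond-parameter function Zf = ln(Cmax) for transition-metal
    solutes in vanadium, using the coefficients from the abstract above.

    delta_x -- electronegativity difference (deltaX)
    delta   -- atomic-size mismatch parameter (delta)
    n       -- electron concentration
    """
    return 7.3165 - 2.7805 * delta_x**2 - 71.278 * delta**2 - 0.85556 * n**(2 / 3)

# Illustrative, made-up parameter values for a hypothetical solute:
print(math.exp(ln_cmax(delta_x=0.1, delta=0.05, n=6.0)))
```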
Efficiency at maximum power of a chemical engine
Hooyberghs, Hans; Salazar, Alberto; Indekeu, Joseph O; Broeck, Christian Van den
2013-01-01
A cyclically operating chemical engine is considered that converts chemical energy into mechanical work. The working fluid is a gas of finite-sized spherical particles interacting through elastic hard collisions. For a generic transport law for particle uptake and release, the efficiency at maximum power η takes the form 1/2 + cΔμ + O(Δμ²), with 1/2 a universal constant and Δμ the chemical potential difference between the particle reservoirs. The linear coefficient c is zero for engines featuring a so-called left/right symmetry or particle fluxes that are antisymmetric in the applied chemical potential difference. Remarkably, the leading constant in η is non-universal with respect to an exceptional modification of the transport law. For a nonlinear transport model we obtain η = 1/(θ + 1), with θ > 0 the power of Δμ in the transport equation.
Radiation Pressure Acceleration: the factors limiting maximum attainable ion energy
Bulanov, S S; Schroeder, C B; Bulanov, S V; Esirkepov, T Zh; Kando, M; Pegoraro, F; Leemans, W P
2016-01-01
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near-complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it trans...
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
On the Threshold of Maximum-Distance Separable Codes
Kindarji, Bruno; Chabanne, Hervé
2010-01-01
Starting from a practical use of Reed-Solomon codes in a cryptographic scheme published in Indocrypt'09, this paper deals with the threshold of linear q-ary error-correcting codes. The security of this scheme is based on the intractability of polynomial reconstruction when there is too much noise in the vector. Our approach switches from this paradigm to an information-theoretical point of view: is there a class of elements that are so far away from the code that the list size is always superpolynomial? Or, dually speaking, is maximum-likelihood decoding almost surely impossible? We relate this issue to the decoding threshold of a code, and show that when the minimal distance of the code is high enough, the threshold effect is very sharp. In a second part, we give explicit lower bounds on the threshold of maximum-distance separable codes such as Reed-Solomon codes, and compute the threshold for the toy example that motivates this study.
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle has been known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle, as its variation reflects the temporal evolution of the dynamic process of solar magnetic activities from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum (Rmax) and the rising rate (βa) at Δm months after the solar minimum (Rmin) is studied and shown to increase as the cycle progresses, with an inflection point (r = 0.83) at about Δm = 20 months. The prediction error of Rmax based on βa is found within estimation at the 90% level of confidence, and the relative prediction error will be less than 20% when Δm ≥ 20. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of Rmax = 84 ± 33 at the 90% level of confidence.
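The prediction scheme described here is, at its core, a regression of Rmax on the rising rate. A minimal Python sketch of that idea follows; the cycle data are made up and stand in for the historical record the paper actually uses.

```python
import numpy as np

# Hypothetical (made-up) historical data: rising rate beta_a measured at
# Delta_m = 20 months after minimum, and the observed solar maximum Rmax.
beta_a = np.array([2.1, 3.4, 4.0, 5.2, 6.1, 4.8])   # activity units per month
r_max = np.array([78.0, 110.0, 125.0, 158.0, 190.0, 145.0])

k, b = np.polyfit(beta_a, r_max, 1)                 # Rmax ~ k * beta_a + b
r = np.corrcoef(beta_a, r_max)[0, 1]
print(f"Rmax = {k:.1f} * beta_a + {b:.1f}  (r = {r:.2f})")

# Predict a new cycle from its observed rising rate:
print("predicted Rmax:", round(k * 2.9 + b, 1))
```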
METHOD FOR DETERMINING THE MAXIMUM ARRANGEMENT FACTOR OF FOOTWEAR PARTS
DRIŞCU Mariana
2014-05-01
Full Text Available By classic methodology, designing footwear is a very complex and laborious activity, because classic methodology requires many graphic executions using manual means, which consume a lot of the producer's time. Moreover, the results of this classical methodology may contain many inaccuracies with the most unpleasant consequences for the footwear producer. Thus, the customer who buys a footwear product on the basis of the characteristics written on the product (size, width) may notice after a period that the product has flaws because of inadequate design. In order to avoid such situations, the strictest scientific criteria must be followed when designing a footwear product; the decisive step in this direction was made some time ago, as a result of powerful technical development and the massive implementation of electronic computing systems and informatics. This paper presents a software product for determining all possible arrangements of a footwear product's reference points, in order to automatically obtain the maximum arrangement factor. The user multiplies the pattern in order to find the economic arrangement of the reference points. For this purpose, the user must probe a few arrangement variants, in the translation and rotation-translation systems. The same process is used in establishing the arrangement factor for the two reference points of the designed footwear product. After probing several variants of arrangement in the translation and rotation-translation systems, the maximum arrangement factors are chosen. This allows the user to estimate the material waste.
Dependence of maximum concentration from chemical accidents on release duration
Hanna, Steven; Chang, Joseph
2017-01-01
Chemical accidents often involve releases of a total mass, Q, of stored material in a tank over a time duration, td, of less than a few minutes. The value of td is usually uncertain because of lack of knowledge of key information, such as the size and location of the hole and the pressure and temperature of the chemical. In addition, it is rare that eyewitnesses or video cameras are present at the time of the accident. For inhalation hazards, serious health effects (such as damage to the respiratory system) are determined by short-term averages. Examples of pressurized liquefied chlorine releases from tanks are given, focusing on scenarios from the Jack Rabbit I (JR I) field experiment. The analytical calculations and the predictions of the SLAB dense gas dispersion model agree that the ratio of maximum C for two different td's is greatest (as much as a factor of ten) near the source. At large distances (beyond a few km for the JR I scenarios), where tt exceeds both td's, the ratio of maximum C approaches unity.
Maximum likelihood estimation for social network dynamics
Snijders, T.A.B.; Koskinen, J.; Schweinberger, M.
2010-01-01
A model for network panel data is discussed, based on the assumption that the observed data are discrete observations of a continuous-time Markov process on the space of all directed graphs on a given node set, in which changes in tie variables are independent conditional on the current graph. The m
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval) (a,b) or (c,d) is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
Full Text Available An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
Full Text Available The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
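The two algorithms contrasted in this abstract are easy to state concretely. Below is a Python sketch (rather than the tutorial's functional, datatype-generic presentation) of the cubic-time specification and the linear-time solution, assuming the empty segment (sum 0) is allowed.

```python
def mss_spec(xs):
    """Cubic-time specification: maximum over all contiguous segments."""
    n = len(xs)
    return max(sum(xs[i:j]) for i in range(n + 1) for j in range(i, n + 1))

def mss_linear(xs):
    """Linear-time solution (Kadane's algorithm): scan once, keeping the
    best sum of a segment ending at the current position."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)   # empty segment allowed
        best = max(best, ending_here)
    return best

xs = [3, -4, 2, 2, -1, 6, -9, 5]
assert mss_spec(xs) == mss_linear(xs) == 9
```

The assert checks the linear algorithm against the specification on a small example; the paper's point is that the latter can be calculated from the former.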
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
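The quantity being bounded here can be computed directly: spectral luminous efficacy is 683 lm/W weighted by the photopic curve V(λ). A hedged Python sketch for a blackbody spectrum truncated to a bandpass follows; the Gaussian approximation to V(λ) is an assumption of this sketch, not the paper's exact curve.

```python
import numpy as np

def planck(lam, T):
    """Blackbody spectral radiance vs wavelength [m] (arbitrary normalization)."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    return 1.0 / (lam**5 * (np.exp(h * c / (lam * kB * T)) - 1.0))

def luminous_efficacy(T, band=(400e-9, 700e-9)):
    """Spectral luminous efficacy [lm/W] of a blackbody at temperature T,
    truncated to the given bandpass. V(lambda) is approximated here by the
    Gaussian 1.019*exp(-285.4*(lam_um - 0.559)**2), a common stand-in for
    the photopic sensitivity curve."""
    lam = np.linspace(band[0], band[1], 2000)
    V = 1.019 * np.exp(-285.4 * (lam * 1e6 - 0.559)**2)
    B = planck(lam, T)
    return 683.0 * np.trapz(V * B, lam) / np.trapz(B, lam)

print(luminous_efficacy(5800.0))   # lands near the 250--370 lm/W range quoted above
```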
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum-space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Effective soil hydraulic conductivity predicted with the maximum power principle
Westhoff, Martijn; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Zehe, Erwin; Dewals, Benjamin
2016-04-01
Drainage of water in soils happens to a large extent through preferential flowpaths, but these subsurface flowpaths are extremely difficult to observe or parameterize in hydrological models. To potentially overcome this problem, thermodynamic optimality principles have been suggested to predict effective parametrizations of these (sub-grid) structures, such as the maximum entropy production principle or the equivalent maximum power principle. These principles have been successfully applied to predict heat transfer from the Equator to the Poles, or turbulent heat fluxes between the surface and the atmosphere. In these examples, the effective flux adapts itself to its boundary condition by adapting its effective conductance through the creation of e.g. convection cells. However, flow through porous media, such as soils, can only quickly adapt its effective flow conductance by creating preferential flowpaths, and it is unknown whether this is guided by the aim to create maximum power. Here we show experimentally that this is indeed the case: in the lab, we created a hydrological analogue to the atmospheric model dealing with heat transport between Equator and Poles. The experimental setup consists of two freely draining reservoirs connected with each other by a confined aquifer. By adding water to only one reservoir, a potential difference builds up until a steady state is reached. From the steady-state potential difference and the observed flow through the aquifer, an effective hydraulic conductance can be determined. This observed conductance does correspond to the one maximizing the power of the flux through the confined aquifer. Although this experiment was done in an idealized setting, it opens doors for better parameterizing hydrological models. Furthermore, it shows that hydraulic properties of soils are not static but change with changing boundary conditions. A potential limitation of the principle is that it only applies to steady-state conditions.
Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee
2011-01-01
of the maximum lifetime routing problem that considers the operation modes of the node. Solution of the linear programming gives the upper analytical bound for the network lifetime. In order to illustrate the application of the optimization model, we solved the problem for different parameter settings...... protocols, and the energy model for transmission. In this paper, we tackle the routing challenge for maximum lifetime of the sensor network. We introduce a novel linear programming approach to the maximum lifetime routing problem. To the best of our knowledge, this is the first mathematical programming...
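A linear-programming formulation of maximum-lifetime routing can be sketched concretely. The toy model below is an assumption-laden illustration (it ignores the operation modes discussed in the paper): the variables are total per-link packet counts and the lifetime T, constrained by flow conservation and per-node energy budgets.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-node network: sensor nodes 1 and 2 send data to sink 0.
# Decision variables x = [q10, q20, q12, q21, T], where qij is the total
# number of packets sent over link i->j during the whole lifetime T.
e_tx, e_rx = 1.0, 0.5          # energy per packet sent / received (assumed)
E1, E2 = 100.0, 60.0           # initial node energies (assumed)
r1, r2 = 1.0, 1.0              # packet generation rates (assumed)

c = np.array([0, 0, 0, 0, -1.0])           # maximize T == minimize -T

A_eq = np.array([                          # flow conservation at nodes 1, 2
    [1, 0, 1, -1, -r1],
    [0, 1, -1, 1, -r2],
])
b_eq = np.zeros(2)

A_ub = np.array([                          # energy budgets at nodes 1, 2
    [e_tx, 0, e_tx, e_rx, 0],
    [0, e_tx, e_rx, e_tx, 0],
])
b_ub = np.array([E1, E2])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 5)
print("maximum lifetime T =", res.x[-1], " link flows:", res.x[:4])
```

The LP optimum gives the analytical upper bound on lifetime mentioned in the abstract; any routing protocol can then be benchmarked against it.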
Permutation Groups with Bounded Movement having Maximum Orbits
Mehdi Alaeiyan; Behnam Razzaghmaneshi
2012-05-01
Let G be a permutation group on a set Ω with no fixed points in Ω, and let m be a positive integer. If no element of G moves any subset of Ω by more than m points (that is, |Γ^g \ Γ| ≤ m for every Γ ⊆ Ω and g ∈ G), and also if each G-orbit has size greater than 2, then the number of G-orbits in Ω is at most (1/2)(3m − 1). Moreover, equality holds if and only if G is an elementary abelian 3-group.
Size selectivity of sole gill nets fished in the North Sea
Madsen, Niels; Holst, René; Wileman, D.
1999-01-01
, plaice and cod for each setting of the gear. It was found that a bi-normal form for the selection curve gave the best fits. Mean selection curves were then estimated by combining sets using a model of between-set variation. The ratio between the length of maximum retention and mesh size was estimated to be 3.28 for sole, 2.60 for plaice and 4.56 for cod. Selection curves were also fitted to the catch data pooled over all sets. The model deviance for the sole and plaice data indicated lack of fit when pooling the catch data. (C) 1999 Elsevier Science B.V. All rights reserved.
THE MAXIMUM AND MINIMUM DEGREES OF RANDOM BIPARTITE MULTIGRAPHS
Chen Ailian; Zhang Fuji; Li Hao
2011-01-01
In this paper the authors generalize the classic random bipartite graph model and define a model of random bipartite multigraphs as follows: let m = m(n) be a positive integer-valued function of n, and let G(n, m; {p_k}) be the probability space consisting of all labeled bipartite multigraphs with two vertex sets A = {a_1, a_2, ..., a_n} and B = {b_1, b_2, ..., b_m}, in which the numbers t(a_i, b_j) of edges between any two vertices a_i ∈ A and b_j ∈ B are identically distributed independent random variables with distribution P{t(a_i, b_j) = k} = p_k, k = 0, 1, 2, ..., where p_k ≥ 0 and Σ p_k = 1. They obtain that X_{c,d,A}, the number of vertices in A with degree between c and d of G_{n,m} ∈ G(n, m; {p_k}), has asymptotically a Poisson distribution, and answer the following two questions about the space G(n, m; {p_k}) with {p_k} having geometric, binomial and Poisson distributions, respectively: Under which condition on {p_k} can there be a function D(n) such that almost every random multigraph G_{n,m} ∈ G(n, m; {p_k}) has maximum degree D(n) in A? Under which condition on {p_k} does almost every multigraph G_{n,m} ∈ G(n, m; {p_k}) have a unique vertex of maximum degree in A?
Exploring the Constrained Maximum Edge-weight Connected Graph Problem
Zhen-ping Li; Shi-hua Zhang; Xiang-Sun Zhang; Luo-nan Chen
2009-01-01
Given an edge-weighted graph, the maximum edge-weight connected graph (MECG) is a connected subgraph with a given number of edges and the maximal weight sum. Here we study a special case, i.e. the Constrained Maximum Edge-Weight Connected Graph problem (CMECG), which is an MECG whose candidate subgraphs must include a given set of k edges, then also called the k-CMECG. We formulate the k-CMECG into an integer linear programming model based on the network flow problem. The k-CMECG is proved to be NP-hard. For the special case 1-CMECG, we propose an exact algorithm and a heuristic algorithm respectively. We also propose a heuristic algorithm for the k-CMECG problem. Some simulations have been done to analyze the quality of these algorithms. Moreover, we show that the algorithm for the 1-CMECG problem can lead to the solution of the general MECG problem.
Superfast maximum-likelihood reconstruction for quantum tomography
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
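The projected-gradient idea can be illustrated on a single qubit. The sketch below uses plain projected gradient ascent on the log-likelihood, not the authors' accelerated scheme; the POVM, counts and step size are invented, and the projection onto density matrices goes through an eigenvalue projection onto the probability simplex.

```python
import numpy as np

def simplex_proj(v):
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def project_density(M):
    """Project a Hermitian matrix onto the density matrices (PSD, trace 1)
    by projecting its eigenvalue vector onto the probability simplex."""
    w, V = np.linalg.eigh((M + M.conj().T) / 2)
    return (V * simplex_proj(w)) @ V.conj().T

# Assumed measurement setup: +/- projectors of the three Pauli bases.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
povm = [(I2 + s * P) / 2 for P in (X, Y, Z) for s in (+1, -1)]
counts = np.array([480, 520, 300, 700, 900, 100], float)   # made-up data
f = counts / counts.sum()

rho = np.eye(2, dtype=complex) / 2        # start from the maximally mixed state
for _ in range(500):                      # plain projected gradient ascent
    p = np.array([np.trace(rho @ E).real for E in povm])
    p = np.clip(p, 1e-12, None)
    grad = sum(fk / pk * E for fk, pk, E in zip(f, p, povm))
    rho = project_density(rho + 0.05 * grad)
print(np.round(rho, 3))
```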
On the Performance of Maximum Likelihood Inverse Reinforcement Learning
Ratia, Héctor; Martinez-Cantin, Ruben
2012-01-01
Inverse reinforcement learning (IRL) addresses the problem of recovering a task description given a demonstration of the optimal policy used to solve such a task. The optimal policy is usually provided by an expert or teacher, making IRL specially suitable for the problem of apprenticeship learning. The task description is encoded in the form of a reward function of a Markov decision process (MDP). Several algorithms have been proposed to find the reward function corresponding to a set of demonstrations. One of the algorithms that has provided best results in different applications is a gradient method to optimize a policy squared error criterion. On a parallel line of research, other authors have presented recently a gradient approximation of the maximum likelihood estimate of the reward signal. In general, both approaches approximate the gradient estimate and the criteria at different stages to make the algorithm tractable and efficient. In this work, we provide a detailed description of the different metho...
Delocalized Epidemics on Graphs: A Maximum Entropy Approach
Sahneh, Faryad Darabi; Scoglio, Caterina
2016-01-01
The susceptible-infected-susceptible (SIS) epidemic process on complex networks can show metastability, resembling an endemic equilibrium. In a general setting, the metastable state may involve a large portion of the network, or it can be localized on small subgraphs of the contact network. Localized infections are not interesting because a true outbreak concerns network-wide invasion of the contact graph rather than localized infection of certain sites within the contact network. Existing approaches to the localization phenomenon suffer from a major drawback: they fully rely on the steady-state solution of mean-field approximate models in the neighborhood of their phase transition point, where their approximation accuracy is worst, as statistical physics tells us. We propose a dispersion entropy measure that quantifies the localization of infections in a generic contact graph. Formulating a maximum entropy problem, we find an upper bound for the dispersion entropy of the possible metastable state in the exa...
A Maximum-Entropy Method for Estimating the Spectrum
无
2007-01-01
Based on the maximum-entropy (ME) principle, a new power spectral estimator for random waves is derived in the form S(ω) = (a/8)H̄²(2π)^(d+1)ω^(-(d+2))exp[-b(2π/ω)^n], by solving a variational problem subject to some quite general constraints. This robust method is comprehensive enough to describe wave spectra even in extreme wave conditions and is superior to the periodogram method, which is not suitable for processing comparatively short or intensively unsteady signals because of its tremendous boundary effect and some inherent defects of the FFT. Fortunately, the newly derived method for spectral estimation works fairly well even when the sample data sets are very short and unsteady, and the reliability and efficiency of this spectral estimator have been preliminarily proved.
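The reconstructed spectral form can be evaluated directly. A short Python sketch follows; the parameter values are made up (the paper fits a, b, d, n and the mean wave height H̄ to data), and were chosen here so that the spectral peak falls inside the plotted frequency range.

```python
import numpy as np

def me_spectrum(omega, a, b, d, n, h_bar):
    """Maximum-entropy wave spectrum, in the reconstructed notation above:
    S(w) = (a/8) * Hbar^2 * (2*pi)^(d+1) * w^(-(d+2)) * exp(-b*(2*pi/w)^n)."""
    return (a / 8.0) * h_bar**2 * (2 * np.pi)**(d + 1) \
        * omega**(-(d + 2)) * np.exp(-b * (2 * np.pi / omega)**n)

w = np.linspace(0.3, 3.0, 300)                            # rad/s
S = me_spectrum(w, a=0.01, b=8e-4, d=3, n=4, h_bar=2.0)   # made-up parameters
print("spectral peak at omega =", round(w[np.argmax(S)], 2), "rad/s")
```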
Marginal Maximum Likelihood Estimation of Item Response Models in R
Matthew S. Johnson
2007-02-01
Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
LIBOR troubles: Anomalous movements detection based on maximum entropy
Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria
2016-05-01
According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations into banks' behavior. Such procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.
Size, productivity, and international banking
Buch, Claudia M.; Koch, Catherine T.; Koetter, Michael
2011-01-01
Heterogeneity in size and productivity is central to models that explain which manufacturing firms export. This study presents descriptive evidence on similar heterogeneity among international banks as financial services providers. A novel and detailed bank-level data set reveals the volume and mode
Effects of preload 4 repetition maximum on 100-m sprint times in collegiate women.
Linder, Elizabeth E; Prins, Jan H; Murata, Nathan M; Derenne, Coop; Morgan, Charles F; Solomon, John R
2010-05-01
The purpose of this study was to determine the effects of postactivation potentiation (PAP) on track-sprint performance after a preload set of 4-repetition-maximum (4RM) parallel back half-squat exercises in collegiate women. All subjects (n = 12) participated in 2 testing sessions over a 3-week period. During the first testing session, subjects performed the Controlled protocol consisting of a 4-minute standardized warm-up, followed by a 4-minute active rest, a 100-m track sprint, a second 4-minute active rest, finalized with a second 100-m sprint. The second testing session, the Treatment protocol, consisted of a 4-minute standardized warm-up, followed by a 4-minute active rest, a sprint, a second 4-minute active rest, a warm-up of 4RM parallel back half-squats, a third 9-minute active rest, finalized with a second sprint. The results indicated that there was a significant improvement of 0.19 seconds (p ...) when the sprint was preceded by the 4RM back-squat protocol during Treatment. The standardized effect size, d, was 0.82, indicating a large effect size. Additionally, the results indicated that mean sprint times would be expected to improve by 0.04-0.34 seconds (p ...). The findings suggest that performing a 4RM parallel back half-squat warm-up before a track sprint will have a positive PAP effect on decreasing track-sprint times. Track coaches looking for the "competitive edge" (PAP effect) may re-warm up their sprinters during meets.
Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs
Long Wan
2015-01-01
Full Text Available This paper investigates a single-machine two-agent scheduling problem to minimize the maximum cost with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of a job is characterized by a variable function of the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.
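A classic building block for minimizing a maximum cost on a single machine is Lawler's backward rule, sketched below in Python. This is a single-agent illustration with position-independent processing times and invented cost functions, not the paper's two-agent, position-dependent Pareto enumeration.

```python
def lawler_min_max_cost(proc, cost):
    """Lawler's rule for minimizing the maximum job cost on one machine:
    build the sequence from the back, always placing last the job whose
    cost at the current makespan is smallest."""
    jobs = set(range(len(proc)))
    t = sum(proc[j] for j in jobs)          # current makespan
    seq = []
    while jobs:
        j = min(jobs, key=lambda k: cost[k](t))
        seq.append(j)
        jobs.remove(j)
        t -= proc[j]
    return seq[::-1]

# Example: three jobs with simple cost functions of the completion time C.
proc = [2, 3, 1]
cost = [lambda C: C, lambda C: 2 * C - 4, lambda C: C + 1]
print(lawler_min_max_cost(proc, cost))      # an order achieving max cost 6
```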
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
Title 20 (Employees' Benefits), Part 211 (Creditable Railroad Compensation), § 211.14 Maximum creditable compensation: ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
Title 49 (Transportation), Part 230 (Allowable Stress), § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often removes the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
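The Poisson-likelihood criterion that CORA optimizes can be sketched in a few lines. The example below fits a constant background plus one Gaussian emission line to simulated low-count data by minimizing the Poisson negative log-likelihood; the model, data and optimizer choice are illustrative assumptions, and CORA's fixed-point update for line fluxes is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(13.0, 14.0, 100)                      # wavelength grid (Angstrom)
true_rate = 0.5 + 4.0 * np.exp(-0.5 * ((x - 13.5) / 0.03)**2)
n = rng.poisson(true_rate)                            # simulated low-count spectrum

def neg_log_like(params):
    """Poisson negative log-likelihood (up to a constant) for a constant
    background plus one Gaussian emission line."""
    bkg, amp, mu, sig = params
    lam = bkg + amp * np.exp(-0.5 * ((x - mu) / sig)**2)
    lam = np.clip(lam, 1e-9, None)                    # keep the rate positive
    return np.sum(lam - n * np.log(lam))

res = minimize(neg_log_like, x0=[1.0, 2.0, 13.45, 0.05], method="Nelder-Mead")
print("background, amplitude, centre, width:", np.round(res.x, 3))
```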
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Productivity response of calcareous nannoplankton to Eocene Thermal Maximum 2 (ETM2)
M. Dedert
2012-05-01
Full Text Available The Early Eocene Thermal Maximum 2 (ETM2) at ~53.7 Ma is one of multiple hyperthermal events that followed the Paleocene-Eocene Thermal Maximum (PETM, ~56 Ma). The negative carbon excursion and deep-ocean carbonate dissolution which occurred during the event imply that a substantial amount (~10³ Gt) of carbon (C) was added to the ocean-atmosphere system, consequently increasing atmospheric CO₂ (pCO₂). This makes the event relevant to the current scenario of anthropogenic CO₂ additions and global change. Resulting changes in ocean stratification and pH, as well as changes in exogenic cycles which supply nutrients to the ocean, may have affected the productivity of marine phytoplankton, especially calcifying phytoplankton. Changes in productivity, in turn, may affect the rate of sequestration of excess CO₂ in the deep ocean and sediments. In order to reconstruct the productivity response of calcareous nannoplankton to ETM2 in the South Atlantic (Site 1265) and North Pacific (Site 1209), we employ the coccolith Sr/Ca productivity proxy, with analysis of well-preserved picked monogeneric populations by ion probe, supplemented by analysis of various size fractions of nannofossil sediments by ICP-AES. The former technique of measuring Sr/Ca in selected nannofossil populations using the ion probe circumvents possible contamination with secondary calcite. Avoiding such contamination is important for an accurate interpretation of the nannoplankton productivity record, since diagenetic processes can bias the productivity signal, as we demonstrate for Sr/Ca measurements in the fine (<20 μm) and other size fractions obtained from bulk sediments from Site 1265. At this site, the paleoproductivity signal as reconstructed from the Sr/Ca appears to be governed by cyclic changes, possibly orbital forcing, resulting in a 20-30% variability in Sr/Ca in dominant genera as obtained by ion probe. The ~13 to 21
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power by up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently, taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet which works against an external load. We maximize the power output by optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. Afterwards, we compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation applying recent developments in the thermodynamics of feedback-controlled systems, which give an equation to compute the entropy reduction due to information. However, this equation requires the computation of the probability of each of the possible sequences of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set strong bounds on the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are big differences between the stationary probabilities of the system states. These big differences are an effect of the potential strength, which minimizes the departures from the Markovianity of the sequence of control actions, allowing also to
Maximum holding endurance time: Effects of load and load's center of gravity height.
Lee, Tzu-Hsien
2015-01-01
Manual holding tasks pose a potential risk for the development of musculoskeletal injuries, since they are prone to induce localized muscle fatigue. Maximum holding endurance time is a significant parameter for the design of manual holding tasks. This study aimed to examine the effects of load and the load's COG height on maximum holding endurance time. Fifteen young and healthy males were recruited as participants. A factorial design was used to examine the effects of load and load's COG height on maximum holding endurance time. Four levels of load (15%, 30%, 45% and 60% of the participant's maximum holding capacity) and two levels of load's COG height in the box (0 cm and 40 cm above the handle position) were examined. Maximum holding endurance time decreased with increasing load and/or increasing load's COG height. The effect of load's COG height on maximum holding endurance time decreased with increasing load. Load, load's COG height, and the interaction of load and load's COG height significantly affected maximum holding endurance time. Practitioners should consider the effects of load, load's COG height, and their interaction on maximum holding endurance time when setting the working conditions of holding tasks.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM, including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
马志浩; 毛良斌; 葛进平; 崔波
2012-01-01
This experimental research is based on the agenda-melding hypothesis and discusses the attribute agenda-setting derived from different media channels. It shows that groups of different sizes exert different influences on attribute agenda-setting, as well as on audiences' reliance on information. It suggests some modifications to the agenda-melding hypothesis and raises questions for further research.
Maximum entropy approach to fuzzy control
Ramer, Arthur; Kreinovich, Vladik YA.
1992-01-01
For the same expert knowledge, if one uses different &- and V-operations in a fuzzy control methodology, one ends up with different control strategies. Each choice of these operations restricts the set of possible control strategies. Since a wrong choice can lead to a low quality control, it is reasonable to try to lose as few possibilities as possible. This idea is formalized and it is shown that it leads to the choice of min(a + b,1) for V and min(a,b) for &. This choice was tried on the NASA Shuttle simulator; it leads to a maximally stable control.
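The derived pair of operations is trivial to state in code; a minimal Python sketch with illustrative membership values:

```python
def f_and(a, b):
    """The '&' choice derived in the abstract: minimum."""
    return min(a, b)

def f_or(a, b):
    """The 'V' choice derived in the abstract: bounded sum."""
    return min(a + b, 1.0)

# Combining two rule activations (illustrative values):
r1, r2 = 0.4, 0.5
print("r1 AND r2 =", f_and(r1, r2))   # 0.4
print("r1 OR  r2 =", f_or(r1, r2))    # 0.9 (saturates at 1.0)
```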
Speech processing using maximum likelihood continuity mapping
Hogden, John E. (Santa Fe, NM)
2000-01-01
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
Speech processing using maximum likelihood continuity mapping
Hogden, J.E.
2000-04-18
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
Decomposition of spectra using maximum autocorrelation factors
Larsen, Rasmus
2001-01-01
into classification or regression type analyses. A featured method for low-dimensional representation of multivariate datasets is Hotelling's principal components transform. We will extend the use of principal components analysis by incorporating new information into the algorithm. This new information consists......This paper addresses the problem of generating a low-dimensional representation of the variation present in a set of spectra, e.g. reflection spectra recorded from a series of objects. The resulting low-dimensional description may subsequently be input through variable selection schemes...... Fourier decomposition these new variables are located in frequency as well as in wavelength. The proposed algorithm is tested on 100 samples of NIR spectra of wheat.
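The maximum autocorrelation factors transform itself is compact: it solves a generalized eigenproblem between the covariance of the data and the covariance of its one-step differences. A numpy/scipy sketch follows, under the assumption that rows are consecutive observations (e.g. spectra ordered along wavelength or spatial index); the test data are synthetic.

```python
import numpy as np
from scipy.linalg import eigh

def maf(X):
    """Maximum autocorrelation factors of a multivariate series X (rows are
    consecutive observations). Directions minimize the variance of the
    one-step difference relative to the total variance, i.e. maximize
    autocorrelation."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)                 # total covariance
    D = np.diff(Xc, axis=0)
    Sd = np.cov(D, rowvar=False)                 # difference covariance
    vals, vecs = eigh(Sd, S)                     # generalized eigenproblem
    return Xc @ vecs, vals                       # factors, smoothest first

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * t)
X = np.column_stack([signal + 0.1 * rng.standard_normal(200) for _ in range(5)])
factors, vals = maf(X)
print(vals[:3])   # small values = smooth (high-autocorrelation) factors
```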
Maximum speeds and alpha angles of flowing avalanches
McClung, David; Gauer, Peter
2016-04-01
A flowing avalanche is one which initiates as a slab and, if consisting of dry snow, will be enveloped in a turbulent snow dust cloud once the speed reaches about 10 m/s. A flowing avalanche has a dense core of flowing material which dominates the dynamics by serving as the driving force for downslope motion. The flow thickness is typically on the order of 1-10 m, which is on the order of about 1% of the length of the flowing mass. We have collected estimates of maximum frontal speed u_m (m/s) from 118 avalanche events. The analysis is given here with the aim of using the maximum speed scaled with some measure of the terrain scale over which the avalanches ran. We have chosen two measures for scaling, from McClung (1990), McClung and Schaerer (2006) and Gauer (2012): √H₀ and √S₀ (total vertical drop; total path length traversed). Our data consist of 118 avalanches with H₀ (m) estimated and 106 with S₀ (m) estimated. Of these, we have 29 values with H₀ (m), S₀ (m) and u_m (m/s) estimated accurately, with the avalanche speeds measured all or nearly all along the path. The remainder of the data set includes approximate estimates of u_m (m/s) from timing the avalanche motion over a known section of the path where the approximate maximum speed is expected, and with either H₀ or S₀ or both estimated. Our analysis consists of fitting the values of u_m/√H₀ and u_m/√S₀ to probability density functions (pdf) to estimate the exceedance probability for the scaled ratios. In general, we found that the larger data sets were best fit by a beta pdf, and for the subset of 29 a shifted log-logistic (s l-l) pdf was best. Our determinations resulted from fitting the values to 60 different pdfs considering five goodness-of-fit criteria: three goodness-of-fit statistics, K-S (Kolmogorov-Smirnov), A-D (Anderson-Darling) and C-S (Chi-squared), plus probability plots (P-P) and quantile plots (Q-Q). For less than 10% probability of exceedance the results show that
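The fitting-and-exceedance step can be illustrated with scipy. The sketch below fits a beta pdf on a fixed support to made-up scaled speeds (standing in for the paper's 118 events) and reads off the level exceeded with 10% probability; the sample, support and parameters are all invented.

```python
import numpy as np
from scipy import stats

# Hypothetical scaled maximum speeds u_m / sqrt(H0), standing in for the
# paper's 118-event data set (units m^(1/2)/s).
rng = np.random.default_rng(42)
ratios = rng.beta(4, 6, size=118) * 4.0       # made-up sample on (0, 4)

# Fit a beta distribution on a fixed support, as in the paper's pdf fitting,
# then read off the scaled speed exceeded with 10% probability.
a, b, loc, scale = stats.beta.fit(ratios, floc=0, fscale=4.0)
u90 = stats.beta.ppf(0.90, a, b, loc=loc, scale=scale)
print(f"10% exceedance level: u_m / sqrt(H0) ~ {u90:.2f}")
```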
A Simulated Annealing Algorithm for Maximum Common Edge Subgraph Detection in Biological Networks
Larsen, Simon; Alkærsig, Frederik G.; Ditzel, Henrik
2016-01-01
introduce a heuristic algorithm for the multiple maximum common edge subgraph problem that is able to detect large common substructures shared across multiple, real-world size networks efficiently. Our algorithm uses a combination of iterated local search, simulated annealing and a pheromone...
S. R. Verkulich
2013-01-01
Full Text Available The interstadial marine deposits stratum was described on the Fildes Peninsula (King George Island) on the basis of field and laboratory investigations during 2008-2011. Fragments of the stratum occur in the west and north-west parts of the peninsula in the following forms: sections of soft sediments containing fossil shells, marine algae, bones of marine animals and rich marine diatom complexes in situ (11 sites); fragments of shells and bones on the surface (25 sites). According to the results of radiocarbon dating, these deposits were accumulated within the period 19-50 ky BP. The geographical and altitude settings of the sites, the age characteristics, the taxonomy of the fossil flora and fauna, and the good preservation of the soft deposits stratum allow the following conclusions: during the interstadial, sea water covered a significant part of King George Island up to the present altitude of 40 m a.s.l., and the King George Island glaciation was smaller then; environmental conditions during accumulation of the interstadial deposit stratum were at least no colder than today; probably, the King George Island territory was covered entirely by the ice masses of the Last Glacial Maximum no earlier than 19 ky BP; during the Last Glacial Maximum, King George Island was covered by thin, "cold", immobile glaciers, which contributed to the conservation of the soft marine interstadial deposits filled with fossil flora and fauna.
Bernstein, Eric F; Civiok, Jennifer M
2013-12-01
Laser beam diameter affects the depth of laser penetration. Q-switched lasers tend to have smaller maximum spot sizes than other dermatologic lasers, making beam diameter a potentially more significant factor in treatment outcomes. To compare the clinical effect of using the maximum-size treatment beam available for each delivered fluence during laser tattoo removal to a standard 4-mm-diameter treatment beam. Thirteen tattoos were treated in 12 subjects using a Q-switched Nd:YAG laser equipped with a treatment beam diameter that was adjustable in 1 mm increments and a setting that would enable the maximally achievable diameter ("MAX-ON" setting) with any fluence. Tattoos were randomly bisected and treated on one side with the MAX-ON setting and on the contralateral side with a standard 4-mm-diameter spot ("MAX-OFF" setting). Photographs were taken 8 weeks following each treatment and each half-tattoo was evaluated for clearance on a 10-point scale by physicians blinded to the treatment conditions. Tattoo clearance was greater on the side treated with the MAX-ON setting in a statistically significant manner following the 1st through 4th treatments, with the MAX-OFF treatment site approaching the clearance of the MAX-ON treatment site after the 5th and 6th treatments. This high-energy, Q-switched Nd:YAG laser with a continuously variable spot-size safely and effectively removes tattoos, with greater removal when using a larger spot-size. © 2013 Wiley Periodicals, Inc.
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
The Last Glacial Maximum experiment in PMIP4-CMIP6
Kageyama, Masa; Braconnot, Pascale; Abe-Ouchi, Ayako; Harrison, Sandy; Lambert, Fabrice; Peltier, W. Richard; Tarasov, Lev
2016-04-01
The Last Glacial Maximum (LGM), around 21,000 years ago, is a cold climate extreme. As such, it has been the focus of many modelling and climate-reconstruction studies, which have brought knowledge of the mechanisms explaining this climate, in terms of climate on the continents and the state of the ocean, and of the relationships between climate changes over land, ice sheets and oceans. It is still a challenge for climate or Earth System models to represent the amplitude of climate changes for this period under the following forcings: ice sheets, which represent perturbations in land surface type, altitude and land/ocean distribution; atmospheric composition; and astronomical parameters. Feedbacks from vegetation and dust are also known to have played a role in setting up the LGM climate but have not been accounted for in previous PMIP experiments. In this poster, we will present the experimental set-up of the PMIP4 LGM experiment, which is presently being discussed and will be finalized for March 2016. For more information and discussion of the PMIP4-CMIP6 experimental design, please visit: https://wiki.lsce.ipsl.fr/pmip3/doku.php/pmip3:cmip6:design:index
Neuromuscular determinants of maximum walking speed in well-functioning older adults.
Clark, David J; Manini, Todd M; Fielding, Roger A; Patten, Carolynn
2013-03-01
Maximum walking speed may offer an advantage over usual walking speed for clinical assessment of age-related declines in mobility function that are due to neuromuscular impairment. The objective of this study was to determine the extent to which maximum walking speed is affected by neuromuscular function of the lower extremities in older adults. We recruited two groups of healthy, well-functioning older adults who differed primarily on maximum walking speed. We hypothesized that individuals with slower maximum walking speed would exhibit reduced lower-extremity muscle size and impaired plantarflexion force production and neuromuscular activation during a rapid contraction of the triceps surae muscle group (soleus (SO) and gastrocnemius (MG)). All participants were required to have a usual 10-meter walking speed of >1.0 m/s. Individuals whose difference between usual and maximum 10 m walking speed reached 0.6 m/s were assigned to the "Faster" group (n=12), and the remainder to the "Slower" group. Peak rate of force development (RFD) and rate of neuromuscular activation (rate of EMG rise) of the triceps surae muscle group were assessed during a rapid plantarflexion movement. Muscle cross-sectional area of the right triceps surae, quadriceps and hamstrings muscle groups was determined by magnetic resonance imaging. Across participants, the difference between usual and maximal walking speed was predominantly dictated by maximum walking speed (r=.85). We therefore report maximum walking speed (1.76 and 2.17 m/s in the Slower and Faster groups, respectively). There were no group differences in muscle cross-sectional area of the triceps surae (p=.44), quadriceps (p=.76) or hamstrings (p=.98). MG rate of EMG rise was positively associated with RFD and maximum 10 m walking speed, but not with usual 10 m walking speed. These findings support the conclusion that maximum walking speed is limited by impaired neuromuscular force production and activation of the triceps surae muscle group. Future research should further evaluate the utility of maximum walking speed in clinical assessment to detect and monitor age-related declines in mobility function.
Approximating maximum clique with a Hopfield network.
Jagota, A
1995-01-01
In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph; it is NP-hard, even to approximate well. Several real-world and theoretical problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics, both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
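The following minimal Python sketch shows one discrete energy-descent dynamics of this kind. The energy function and penalty constant are a standard textbook choice (an assumption on our part), not necessarily the exact network used in the paper, but its stable states are likewise maximal cliques.

```python
import random

def hopfield_clique(n, edges, sweeps=200, penalty=2.0, seed=0):
    """Descend E(x) = -sum_i x_i + penalty * (# active non-adjacent pairs).
    With penalty > 1, every stable state of this dynamics is a maximal clique."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    x = [0] * n
    for _ in range(sweeps * n):
        k = rng.randrange(n)
        # active vertices not adjacent to k would violate the clique property
        conflicts = sum(1 for v in range(n) if v != k and x[v] and v not in adj[k])
        if conflicts == 0:
            x[k] = 1                 # joining lowers the energy by 1
        elif penalty * conflicts > 1:
            x[k] = 0                 # dropping a conflicted vertex lowers the energy
    return [v for v in range(n) if x[v]]

# A 4-cycle with chord 0-2 has two maximum cliques, {0,1,2} and {0,2,3};
# the descent settles on one of them.
print(hopfield_clique(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```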
A polynomial algorithm for abstract maximum flow
McCormick, S.T. [Univ. of British Columbia, Vancouver, British Columbia (Canada)]
1996-12-31
Ford and Fulkerson's original 1956 max flow/min cut paper formulated max flow in terms of flows on paths, rather than the more familiar flows on arcs. In 1974 Hoffman pointed out that Ford and Fulkerson's original proof was quite abstract, and applied to a wide range of max-flow-like problems. In this abstract model we have capacitated elements, and linearly ordered subsets of elements called paths. When two paths share an element ("cross"), then there must be a path that is a subset of the first path up to the cross, and a subset of the second path after the cross. (Hoffman's generalization of) Ford and Fulkerson's proof showed that the max flow/min cut theorem still holds under this weak assumption. However, this proof is non-constructive. To get an algorithm, we assume that we have an oracle whose input is an arbitrary subset of elements, and whose output is either a path contained in that subset, or the statement that no such path exists. We then use complementary slackness to show how to augment any feasible set of path flows to a set with a strictly larger total flow value using a polynomial number of calls to the oracle. Standard scaling techniques then yield an overall polynomial algorithm for finding both a max flow and a min cut. Hoffman's paper actually considers a sort of supermodular objective on the path flows, which allows him to include transportation problems and thus min-cost flow in his framework. We also discuss extending our algorithm to this more general case.
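A rough Python sketch of the oracle interface and the outer augmentation loop described above. Note the hedge: plain path augmentation without the complementary-slackness rerouting at crossings is not guaranteed to reach the abstract maximum flow, so this shows only the oracle-driven structure, not the full algorithm.

```python
def naive_abstract_flow(capacity, path_oracle):
    """capacity: dict mapping each element to its capacity.
    path_oracle(S): returns an ordered tuple of elements forming a path
    contained in the set S, or None if S contains no path.
    Returns a list of (path, flow) pairs; a simplification of the paper's
    algorithm that never reroutes existing path flows."""
    used = {e: 0.0 for e in capacity}
    flow_paths = []
    while True:
        live = frozenset(e for e in capacity if capacity[e] - used[e] > 0)
        path = path_oracle(live)
        if path is None:
            return flow_paths
        push = min(capacity[e] - used[e] for e in path)   # bottleneck residual
        for e in path:
            used[e] += push
        flow_paths.append((path, push))
```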
U.S. Geological Survey, Department of the Interior — This data set represents the average monthly maximum temperature in Celsius multiplied by 100 for 2002 compiled for every catchment of NHDPlus for the conterminous...
Fuzzy sets, rough sets, multisets and clustering
Dahlbom, Anders; Narukawa, Yasuo
2017-01-01
This book is dedicated to Prof. Sadaaki Miyamoto and presents cutting-edge papers in some of the areas to which he contributed. Bringing together contributions by leading researchers in the field, it concretely addresses clustering, multisets, rough sets and fuzzy sets, as well as their applications in areas such as decision-making. The book is divided into four parts, the first of which focuses on clustering and classification. The second part puts the spotlight on multisets, bags, fuzzy bags and other fuzzy extensions, while the third deals with rough sets. Rounding out the coverage, the last part explores fuzzy sets and decision-making.
Dr. Pranita Goswami
2011-01-01
A Partial Fuzzy Set is a portion of a Fuzzy Set and is again a Fuzzy Set. In a Partial Fuzzy Set, the baseline is shifted from 0 to 1, or to any of its α-cuts. In this paper we fuzzify a portion of a Fuzzy Set by transformation.
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
A mini-exhibition with maximum content
Laëtitia Pedroso
2011-01-01
The University of Budapest has been hosting a CERN mini-exhibition since 8 May. While smaller than the main travelling exhibition, it has a number of major advantages: its compact design alleviates transport difficulties and makes it easier to find suitable venues in the Member States, its content can be updated almost instantaneously, and it will become even more interactive and high-tech as time goes by. The exhibition on display in Budapest. The purpose of CERN's new mini-exhibition is to be more interactive and easier to install. Because of its size, the main travelling exhibition cannot be moved around quickly, which is why it stays in the same country for 4 to 6 months; this means a long waiting list for the other Member States. To solve this problem, the Education Group has designed a new exhibition that is smaller and thus easier to install. Smaller maybe, but no less rich in content: the new exhibition conveys exactly the same messages as its larger counterpart. However, in the slimm...
Safe inductive power transmission to millimeter-sized implantable microelectronics devices.
Ibrahim, Ahmed; Kiani, Mehdi
2015-08-01
Power transfer efficiency (PTE) and power delivered to the load (PDL) are key inductive link design parameters for powering millimeter-sized implants. While several groups have suggested increasing the power carrier frequency (fp) of inductive links to hundreds of MHz to maximize PTE, we have demonstrated that operating at tens of MHz offers a higher allowable PDL under specific absorption rate (SAR) constraints. We have proposed a closed-form power function that relates the maximum power levels that can safely be transferred at different frequencies under the SAR constraints. Three sets of inductive links at frequencies of 50 MHz, 200 MHz, and 400 MHz have been optimized for powering a 1 mm(3)-sized implant. We have shown in simulations that reducing fp from 200 MHz to 50 MHz, along with shrinking the size of the transmitter coil, results in ~7.8 times higher PDL under SAR constraints, at the cost of only a 52% drop in PTE.
On Some Proximity Problems of Colored Sets
范成林; 罗军; 王文成; 钟发荣; 朱滨海
2014-01-01
The maximum diameter color-spanning set problem (MaxDCS) is defined as follows: given n points with m colors, select m points with m distinct colors such that the diameter of the set of chosen points is maximized. In this paper, we design an optimal O(n log n) time algorithm using rotating calipers for MaxDCS in the plane. Our algorithm can also be used to solve the maximum diameter problem of imprecise points modeled as polygons. We also give an optimal algorithm for the all farthest foreign neighbor problem (AFFN) in the plane, and propose algorithms to answer the farthest foreign neighbor query (FFNQ) of colored sets in two- and three-dimensional space. Furthermore, we study the problem of computing the closest pair of color-spanning set (CPCS) in d-dimensional space, and remove the log m factor in the best known time bound if d is a constant.
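To make the MaxDCS definition concrete, here is a brute-force baseline in Python; it is exponential in the number of colors, whereas the paper's rotating-calipers algorithm achieves O(n log n) in the plane.

```python
from itertools import product
from math import dist  # Python 3.8+

def max_dcs_bruteforce(points_by_color):
    """points_by_color: one list of (x, y) points per color.
    Tries every way of picking one point per color and returns the largest
    diameter achieved -- a reference baseline only."""
    best = 0.0
    for choice in product(*points_by_color):
        diam = max((dist(p, q) for i, p in enumerate(choice)
                    for q in choice[i + 1:]), default=0.0)
        best = max(best, diam)
    return best

# Two colors, two candidates each: the optimum picks the farthest bichromatic pair.
print(max_dcs_bruteforce([[(0, 0), (1, 0)], [(0, 1), (5, 5)]]))
```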
Analyzing flow anisotropies with excursion sets in relativistic heavy-ion collisions
Mohapatra, Ranjita K; Srivastava, Ajit M
2011-01-01
We show that flow anisotropies in relativistic heavy-ion collisions can be analyzed using a technique of shape analysis of excursion sets recently proposed by us for CMBR fluctuations to investigate the anisotropic expansion history of the universe. The technique analyzes the shapes (sizes) of patches above (below) a certain threshold value of transverse energy/particle number (the excursion sets) as a function of azimuthal angle and rapidity. Modeling flow by imparting extra anisotropic momentum to the momentum distribution of particles from HIJING, we compare the resulting distributions of excursion sets at two different azimuthal angles. The angles with the maximum difference between the two distributions identify the event plane, and the magnitude of the difference relates to the magnitude of the momentum anisotropy, i.e. the elliptic flow.
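A rough sketch of the patch analysis, assuming a pre-binned (azimuth x rapidity) particle-count map; the binning, the threshold and the use of scipy's connected-component labelling are our placeholders for the authors' shape analysis.

```python
import numpy as np
from scipy import ndimage

def patch_sizes(counts, threshold):
    """counts: 2D histogram of particle number over (azimuth bin, rapidity bin).
    Returns the sizes (in bins) of connected patches above the threshold."""
    mask = counts > threshold
    labels, n = ndimage.label(mask)
    return ndimage.sum(mask, labels, index=np.arange(1, n + 1))

# Compare patch-size distributions in two azimuthal windows of a toy event;
# the window pair with the largest difference would identify the event plane.
rng = np.random.default_rng(1)
counts = rng.poisson(5.0, size=(64, 32))          # azimuth x rapidity, no flow
sizes_a = patch_sizes(counts[:32], threshold=7)   # first azimuthal half
sizes_b = patch_sizes(counts[32:], threshold=7)   # second azimuthal half
```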
Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J
2016-02-07
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small-field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air, using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest available field size. The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in full width at half maximum (FWHM) compared with the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
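The abstract does not spell out the update rule, but the classical MLEM iteration for a linear Poisson model has the multiplicative form sketched below; the matrix A is a generic stand-in for the paper's ray-trace from the source plane to the exit plane.

```python
import numpy as np

def mlem(A, y, n_iter=200, eps=1e-12):
    """Maximum-likelihood EM for y ~ Poisson(A @ s) with s >= 0.
    A: (m, n) system matrix (forward projection), y: measured fluence profile.
    Returns the estimated source intensity distribution s."""
    s = np.ones(A.shape[1])            # flat initial source estimate
    norm = A.T @ np.ones(A.shape[0])   # sensitivity image (column sums of A)
    for _ in range(n_iter):
        proj = A @ s                                # forward projection
        ratio = y / np.maximum(proj, eps)           # measured / predicted
        s *= (A.T @ ratio) / np.maximum(norm, eps)  # multiplicative update
    return s
```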
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Jesser J. Marulanda-Durango
2012-12-01
Full Text Available In this paper, we present a methodology for estimating the parameters of a model of an electric arc furnace by using maximum likelihood estimation, one of the most widely employed methods for parameter estimation in practical settings. The model of the electric arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open-source MATLAB® toolbox, to solve the set of non-linear algebraic equations that relate all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken during the furnace's most critical operating point. We show how the model of the electric arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. Results obtained show a maximum error of 5% in the current's root-mean-square value.
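As a hedged Python stand-in for the generic estimation step (the paper itself uses the NETLAB toolbox in MATLAB), the sketch below maximizes a Gaussian log-likelihood of the mismatch between measured and simulated waveforms; model, theta0 and the noise treatment are our assumptions, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize

def fit_mle(model, t, v_measured, theta0):
    """model(t, theta): simulated arc voltage for parameter vector theta.
    Under i.i.d. Gaussian residuals, MLE reduces to minimizing the negative
    log-likelihood, jointly over the parameters and the (log) noise scale."""
    def nll(params):
        theta, log_sigma = params[:-1], params[-1]
        resid = v_measured - model(t, theta)
        sigma2 = np.exp(2.0 * log_sigma)
        return 0.5 * np.sum(resid**2 / sigma2 + np.log(2 * np.pi * sigma2))
    res = minimize(nll, np.append(theta0, 0.0), method='Nelder-Mead')
    return res.x[:-1]   # estimated furnace-model parameters
```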
Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.
Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man
2009-10-01
In this paper, we carry out an in-depth theoretical investigation of the existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975), both in the full-data setting and in the presence of missing covariate data. The main motivation for this work arises from missing-data problems, where models can easily become difficult to estimate with certain missing-data configurations or large missing-data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) in completely observed data (i.e., no missing data) settings, as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.
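For readers who want to see the estimate whose existence the paper characterizes, here is a minimal sketch using the third-party lifelines package (our choice of tool, not the paper's): fitting a Cox model on a toy dataset maximizes the partial likelihood.

```python
import pandas as pd
from lifelines import CoxPHFitter  # assumed third-party dependency

# Tiny synthetic dataset: duration T, event indicator E, one covariate x.
df = pd.DataFrame({'T': [5, 8, 3, 9, 6, 4],
                   'E': [1, 0, 1, 1, 0, 1],
                   'x': [0.2, 1.5, 0.1, 2.0, 0.7, 0.3]})

cph = CoxPHFitter()
cph.fit(df, duration_col='T', event_col='E')  # maximizes the partial likelihood
cph.print_summary()
# When the paper's existence conditions fail (e.g. a covariate perfectly
# separates events from censorings), the partial likelihood is monotone
# and such a fit diverges rather than returning a finite MPLE.
```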
Investigation on the Maximum Power Point in Solar Panel Characteristics Due to Irradiance Changes
Abdullah, M. A.; Fauziah Toha, Siti; Ahmad, Salmiah
2017-03-01
One of the disadvantages of the photovoltaic module compared to other renewable resources is the dynamic characteristic of solar irradiance, caused by inconsistent weather conditions and the surrounding temperature. Commonly, photovoltaic power generation systems include an embedded control system to maximize the power generated despite this inconsistency in irradiance. In order to simplify the power optimization control, this paper presents the characteristics of the Maximum Power Point at various irradiance levels for Maximum Power Point Tracking (MPPT). The technique requires a set of data from a photovoltaic simulation model to be extrapolated into a standard relationship between irradiance and maximum power. The result shows that the relationship between irradiance and maximum power can be represented by a simplified quadratic equation.
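A minimal sketch of the extrapolation step, assuming synthetic (irradiance, maximum power) pairs taken from a PV simulation; only the quadratic form comes from the abstract, and the numbers below are placeholders.

```python
import numpy as np

# Placeholder (irradiance W/m^2, maximum power W) pairs from a PV simulation.
G = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
P = np.array([38.0, 79.0, 121.0, 160.0, 199.0])

# Simplified quadratic relationship P_max = a*G^2 + b*G + c, as the paper suggests.
a, b, c = np.polyfit(G, P, deg=2)

def pmax_reference(g):
    """MPPT reference power for a measured irradiance g."""
    return a * g**2 + b * g + c
```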
Municipal Size and Electoral Participation
Mouritzen, Poul Erik; Rose, Lawrence; Denters, Bas
placed on the implications of size for the character and vitality of local democracy. This paper summarizes findings from a comparative research project which has sought to redress this imbalance by undertaking a closer inspection of the relationships between municipal size and a set of indicators... regarding the character of local democracy in four European countries: Switzerland, Norway, Denmark and the Netherlands. The investigation draws upon cross-section interview data collected by means of a nested sample design consistent with the hierarchical nature of the issues involved. Empirical analyses...
Computational complexity of some maximum average weight problems with precedence constraints
Faigle, Ulrich; Kern, Walter
1994-01-01
Maximum average weight ideal problems in ordered sets arise from modeling variants of the investment problem and, in particular, learning problems in the context of concepts with tree-structured attributes in artificial intelligence. Similarly, trying to construct tests with high reliability leads to...
Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee
2011-01-01
of the maximum lifetime routing problem that considers the operation modes of the node. Solution of the linear program gives the upper analytical bound for the network lifetime. In order to illustrate the application of the optimization model, we solved the problem for different parameter settings...
van de Plassche EJ; Polder MD; Canton JH
1992-01-01
In this report Maximum Permissible Concentrations (MPC) are derived for 9 trace metals on the basis of ecotoxicological data. The elements are: antimony, barium, beryllium, cobalt, molybdenum, selenium, thallium, tin, and vanadium. The study was carried out in the framework of the project "Setting int...
The NFL Combine 40-Yard Dash: How Important is Maximum Velocity?
Clark, Kenneth P; Rieger, Randall H; Bruno, Richard F; Stearne, David J
2017-06-22
This investigation analyzed the sprint velocity profiles for athletes who completed the 40-yard (36.6m) dash at the 2016 NFL Combine. The purpose was to evaluate the relationship between maximum velocity and sprint performance, and to compare acceleration patterns for fast and slow athletes. Using freely available online sources, data were collected for body mass and sprint performance (36.6m time with split intervals at 9.1 and 18.3m). For each athlete, split times were utilized to generate modeled curves of distance vs. time, velocity vs. time, and velocity vs. distance using a mono-exponential equation. Model parameters were used to quantify acceleration patterns as the ratio of maximum velocity to maximum acceleration (vmax / amax, or τ). Linear regression was used to evaluate the relationship between maximum velocity and sprint performance for the entire sample. Additionally, athletes were categorized into fast and slow groups based on maximum velocity, with independent t-tests and effect size statistics used to evaluate between-group differences in sprint performance and acceleration patterns. Results indicated that maximum velocity was strongly correlated with sprint performance across 9.1m, 18.3m, and 36.6m (r of 0.72, 0.83, and 0.94, respectively). However, both fast and slow groups accelerated in a similar pattern relative to maximum velocity (τ = 0.768 ± 0.068s for the fast group and τ = 0.773 ± 0.070s for the slow group). We conclude that maximum velocity is of critical importance to 36.6m time, and inclusion of more maximum velocity training may be warranted for athletes preparing for the NFL Combine.
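The mono-exponential model in the abstract can be written as v(t) = vmax*(1 - exp(-t/tau)), which integrates to the distance-time curve fitted below; the split times in this sketch are illustrative, not Combine data.

```python
import numpy as np
from scipy.optimize import curve_fit

def distance(t, vmax, tau):
    """Integral of the mono-exponential velocity model v(t) = vmax*(1 - exp(-t/tau))."""
    return vmax * (t + tau * np.exp(-t / tau) - tau)

# Illustrative splits: times (s) at the 9.1 m, 18.3 m and 36.6 m marks.
t_split = np.array([1.55, 2.65, 4.55])
d_split = np.array([9.1, 18.3, 36.6])

(vmax, tau), _ = curve_fit(distance, t_split, d_split, p0=(9.0, 0.8))
print(f"vmax = {vmax:.2f} m/s, tau = {tau:.3f} s")  # tau = vmax / amax
```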
Pristipomoides filamentosus Size at Maturity Study
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains information used to help determine median size at 50% maturity for the bottomfish species, Pristipomoides filamentosus in the Main Hawaiian...
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
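A toy numerical illustration of the core idea, replacing the linear mapping of MLLR with a polynomial one; under Gaussian residuals the least-squares fits below are the maximum likelihood estimates. The data are synthetic stand-ins, not HMM mean vectors.

```python
import numpy as np

# Synthetic clean-condition features x and noisy-condition targets y.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 0.5 * x**3 - x + 0.1 * rng.standard_normal(50)

w_lin = np.polyfit(x, y, deg=1)    # MLLR-style linear mapping
w_poly = np.polyfit(x, y, deg=3)   # polynomial regression relaxes the linear hypothesis

# Under Gaussian residuals these least-squares solutions are the ML estimates;
# the cubic fit captures the curvature the linear map must ignore.
```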
Volumetric Concentration Maximum of Cohesive Sediment in Waters: A Numerical Study
Jisun Byun
2014-12-01
Full Text Available Cohesive sediment has different characteristics compared to non-cohesive sediment. The density and size of a cohesive sediment aggregate (a so-called floc) change continuously through the flocculation process. The variation of floc size and density can cause a change of volumetric concentration under the condition of constant mass concentration. This study investigates how the volumetric concentration is affected by different conditions such as flow velocity, water depth, and sediment suspension. A previously verified, one-dimensional vertical numerical model is utilized here, and the flocculation process is considered through a floc-growth-type flocculation model. Idealized conditions are assumed for the numerical experiments. The simulation results show that the volumetric concentration profile of cohesive sediment is different from the Rouse profile: the volumetric concentration decreases near the bed, showing an elevated maximum, in the cases of both current and oscillatory flow. The density and size of flocs show their minimum and maximum values, respectively, near the elevation of the volumetric concentration maximum. This study also shows that the flow velocity and the critical shear stress have significant effects on the elevated maximum of volumetric concentration. As mechanisms of the elevated maximum, strong turbulence intensity and increased mass concentration are considered, because they enhance the flocculation process. This study relies on numerical experiments; to the best of our knowledge, no laboratory or field experiments on the elevated maximum have been carried out to date, and it is of great necessity to conduct well-controlled laboratory experiments in the near future.
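For reference, the Rouse profile against which the simulated volumetric profiles are compared can be computed directly from the standard formula; the reference height, concentration and Rouse number below are placeholders.

```python
import numpy as np

def rouse_profile(z, h, z_a, c_a, rouse_number):
    """Classical Rouse suspended-sediment profile:
    C(z) = C_a * [((h - z)/z) * (z_a/(h - z_a))]**P,
    with z elevations above the bed, h water depth, z_a reference height,
    C_a reference concentration and P = w_s/(kappa*u_*) the Rouse number."""
    return c_a * (((h - z) / z) * (z_a / (h - z_a))) ** rouse_number

z = np.linspace(0.01, 0.99, 99)                  # elevations for unit depth h = 1
profile = rouse_profile(z, 1.0, 0.05, 1.0, 1.0)  # monotone: no elevated maximum
```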
M. Mihelich
2014-11-01
Full Text Available We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov-Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov-Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux, and the optimal number of degrees of freedom (resolution) to describe the system.
Physics-based estimates of maximum magnitude of induced earthquakes
Ampuero, Jean-Paul; Galis, Martin; Mai, P. Martin
2016-04-01
In this study, we present new findings from integrating earthquake physics and rupture dynamics into estimates of the maximum magnitude of induced seismicity (Mmax). Existing empirical relations for Mmax lack a physics-based relation between earthquake size and the characteristics of the triggering stress perturbation. To fill this gap, we extend our recent work on the nucleation and arrest of dynamic ruptures derived from fracture mechanics theory. There, we derived theoretical relations between the area and overstress of an overstressed asperity and the ability of ruptures to either stop spontaneously (sub-critical ruptures) or run away (super-critical ruptures). These relations were verified by comparison with simulation and laboratory results, namely 3D dynamic rupture simulations on faults governed by slip-weakening friction, and laboratory experiments of frictional sliding nucleated by localized stresses. Here, we apply and extend these results to situations representative of the induced-seismicity environment. We present physics-based predictions of Mmax on a fault intersecting a cylindrical reservoir. We investigate the dependence of Mmax on pore-pressure variations (by varying reservoir parameters), frictional parameters and the stress conditions of the fault. We also derive Mmax as a function of injected volume. Our approach provides results that are consistent with observations, but suggests a different scaling with injected volume than the empirical relation of McGarr (2014).
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein
2001-02-01
The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells (8,000 metric tons [tonnes]), of a size sufficient to replicate many heat and compaction characteristics of larger "full-scale" landfills. An enhanced demonstration cell received moisture supplementation to field capacity: the maximum moisture that waste can hold while still limiting the liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control landfill module receiving no moisture additions. Gas recovery has continued for a period of over 4 years. It is quite encouraging that the enhanced cell's methane recovery has been close to 10-fold that experienced with conventional landfills. This is the highest methane recovery rate per unit waste, and thus progress toward stabilization, documented anywhere for such a large waste mass. This high recovery rate is attributed to moisture, and to elevated temperature attained inexpensively during startup. Economic analyses performed under Phase I of this NETL contract indicate the "greenhouse cost effectiveness" to be excellent. Other benefits include substantial waste volume loss (over 30%), which translates to extended landfill life. Further environmental benefits include rapidly improved quality and stabilization (lowered pollutant levels) of the liquid leachate that drains from the waste.
Paddle River Dam : review of probable maximum flood
Clark, D. [UMA Engineering Ltd., Edmonton, AB (Canada); Neill, C.R. [Northwest Hydraulic Consultants Ltd., Edmonton, AB (Canada)
2008-07-01
The Paddle River Dam was built in northern Alberta in the mid 1980s for flood control. According to the 1999 Canadian Dam Association (CDA) guidelines, this 35-metre-high, zoned earthfill dam, with a spillway capacity sized to accommodate a probable maximum flood (PMF), is rated as a very high hazard. At the time of design, the PMF was estimated to have a peak flow rate of 858 m³/s. A review of the PMF in 2002 increased the peak flow rate to 1,890 m³/s. In light of a 2007 revision of the CDA safety guidelines, the PMF was reviewed and the inflow design flood (IDF) was re-evaluated. This paper discussed the levels of uncertainty inherent in PMF determinations and some difficulties encountered with the SSARR hydrologic model and the HEC-RAS hydraulic model in unsteady mode. The paper also presented and discussed the analysis used to determine incremental damages, upon which a new IDF of 840 m³/s was recommended. The paper discussed the PMF review, modelling methodology, hydrograph inputs, and incremental damage of floods. It was concluded that the PMF review, involving hydraulic routing through the valley bottom together with reconsideration of the previous runoff modelling, provides evidence that the peak reservoir inflow could reasonably be reduced by approximately 20 per cent. 8 refs., 5 tabs., 8 figs.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
20 CFR Part 617 (Trade Adjustment Assistance for Workers under the Trade Act of 1974), Trade Readjustment Allowances (TRA): § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
...specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... § 94.107 Determination of maximum test speed (40 CFR, Protection of Environment). (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
§ 25.1505 Maximum operating limit speed (14 CFR, Aeronautics and Space, Operating Limitations). The maximum operating limit speed (VMO/MMO, airspeed or Mach number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M.; Midtgaard, J.
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...