Maximum-likelihood cluster reconstruction
Bartelmann, Matthias; Narayan, Ramesh; Seitz, Stella; Schneider, Peter
1996-01-01
We present a novel method to reconstruct the mass distribution of galaxy clusters from their gravitational lens effect on background galaxies. The method is based on a least-chi-square fit of the two-dimensional gravitational cluster potential. It combines information from shear and magnification by the cluster lens and is designed to easily incorporate possible additional information. We describe the technique and demonstrate its feasibility with simulated data. Both the cluster morphology and the total cluster mass are well reproduced.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied extensively. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, and then performs MEC clustering in that feature space. The experimental results show that the proposed method performs better on non-hyperspherical and complex data structures.
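The update loop sketched below illustrates the kind of kernelized maximum-entropy iteration this abstract describes; it is a minimal Python sketch under stated assumptions, not the authors' implementation, and the Gram-matrix input, inverse temperature beta, and iteration count are all assumptions.

```python
import numpy as np

def kernel_mec(K, n_clusters, beta=5.0, n_iter=100, seed=0):
    """Sketch of kernel-based maximum-entropy clustering (KMEC).

    K: (n, n) Mercer kernel (Gram) matrix; beta is the inverse
    "temperature" of the Gibbs memberships (hypothetical parameters).
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)        # soft memberships
    for _ in range(n_iter):
        W = U / U.sum(axis=0)                # normalized cluster weights
        # squared distance in feature space via the kernel trick:
        # ||phi(x_i) - c_j||^2 = K_ii - 2 (K W)_ij + (W^T K W)_jj
        d2 = (np.diag(K)[:, None] - 2.0 * K @ W
              + np.einsum('ij,ik,kj->j', W, K, W)[None, :])
        # maximum-entropy (Gibbs) membership update
        U = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
        U /= U.sum(axis=1, keepdims=True)
    return U.argmax(axis=1)
```

For example, passing the linear kernel K = X @ X.T recovers soft k-means-like behavior, while an RBF Gram matrix yields the nonlinear separation the abstract refers to.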
AN INVERSE MAXIMUM CAPACITY PATH PROBLEM WITH LOWER BOUND CONSTRAINTS
杨超; 陈学旗
2002-01-01
The computational complexity of the inverse minimum capacity path problem with a lower bound on the capacity of the maximum capacity path is examined, and it is proved that this problem is NP-complete. A strongly polynomial algorithm for a local optimal solution is provided.
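For context, the forward problem that the inverse version perturbs is the maximum capacity (widest) path computation, which a Dijkstra-style search solves; a minimal sketch, with the adjacency format and function name chosen here purely for illustration:

```python
import heapq

def max_capacity_path(adj, s, t):
    """Widest-path (maximum capacity path) value from s to t.

    adj: dict node -> list of (neighbor, capacity). A Dijkstra-style
    sketch of the forward problem underlying the inverse problem.
    """
    best = {s: float('inf')}
    heap = [(-best[s], s)]
    while heap:
        cap, u = heapq.heappop(heap)
        cap = -cap
        if u == t:
            return cap
        if cap < best.get(u, 0):
            continue                          # stale heap entry
        for v, c in adj.get(u, ()):
            w = min(cap, c)                   # bottleneck along the path
            if w > best.get(v, 0):
                best[v] = w
                heapq.heappush(heap, (-w, v))
    return 0.0
```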
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Maximum-entropy clustering algorithm and its global convergence analysis
[None listed]
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
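A minimal sketch of such a maximum-entropy iteration (Gibbs memberships at a fixed inverse temperature beta, recovering hard C-means as beta grows) might look as follows; the parameter values are assumptions, not the paper's:

```python
import numpy as np

def mec(X, n_clusters, beta=10.0, n_iter=100, seed=0):
    """Maximum-entropy clustering sketch: Gibbs memberships at inverse
    temperature beta; beta -> infinity recovers hard C-means."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), n_clusters, replace=False)]  # init centers
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        U = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
        U /= U.sum(axis=1, keepdims=True)       # soft assignments
        C = (U.T @ X) / U.sum(axis=0)[:, None]  # weighted center update
    return C, U
```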
Estimating landscape carrying capacity through maximum clique analysis.
Donovan, Therese M; Warrington, Gregory S; Schwenk, W Scott; Dinitz, Jeffrey H
2012-12-01
Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in an 1153-km2 study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m2 HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be
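The graph construction and clique search described in this abstract could be sketched as below; the paper used the program Cliquer, and networkx's maximal-clique enumeration is assumed here as a stand-in, with `compatible` a hypothetical placeholder for the GIS-derived territory-overlap test:

```python
import networkx as nx  # assumed available; the study itself used Cliquer

def carrying_capacity(points, compatible):
    """Estimate N_k as the maximum clique of the compatibility graph.

    points: candidate pseudo-home-range centers; compatible(p, q) tests
    that two home ranges can coexist without violating territory
    boundaries (both are hypothetical stand-ins for the paper's inputs).
    """
    G = nx.Graph()
    G.add_nodes_from(range(len(points)))
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if compatible(points[i], points[j]):
                G.add_edge(i, j)
    # exact maximum clique by enumerating maximal cliques (worst-case
    # exponential, consistent with the computational limits noted above)
    return max(nx.find_cliques(G), key=len)
```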
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we survey the development of the maximum-entropy clustering algorithm, point out that the maximum-entropy clustering algorithm is not new in essence, and construct two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may converge not to a local minimum of its objective function but to a saddle point. Based on these results, we show that the convergence theorem of the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general cases.
Negative heat capacity of sodium clusters
Reyes-Nava, Juan A.; Garzon, Ignacio L.; Michaelian, Karo
2003-01-01
Heat capacities of Na_N, N = 13, 20, 55, 135, 142, and 147, clusters have been investigated using a many-body Gupta potential and microcanonical molecular dynamics simulations. Negative heat capacities around the cluster melting-like transition have been obtained for N = 135, 142, and 147, but the smaller clusters (N = 13, 20, and 55) do not show this peculiarity. By performing a survey of the cluster potential energy landscape (PEL), it is found that the width of the distribution function of the kinetic energy and the spread of the distribution of potential energy minima (isomers) are useful features to determine the different behavior of the heat capacity as a function of the cluster size. The effect of the range of the interatomic forces is studied by comparing the heat capacities of the Na_55 and Cd_55 clusters. It is shown that by decreasing the range of the many-body interaction, the distribution of isomers characterizing the PEL is modified appropriately to generate a negative heat capacity in the Cd_55...
Maximum capacities of the 100-B water plant
Strand, N.O.
1953-04-27
Increases in process water flows will be needed as the current program of increasing pile power levels continues. The future process water flows that will be required are known to be beyond the present maximum capacities of component parts of the water system. It is desirable to determine the present maximum capacity of each major component part so that plans can be made for modifications and/or additions to the present equipment to meet future required flows. The apparent hydraulic limit of the present piles is about 68,000 gpm. This figure is based on a tube inlet pressure of 400 psi, a tube flow of 34 gpm, and 2,000 effective tubes. In this document the results of tests and calculations to determine the present maximum capacities of each major component part of the 100-B water system are presented. Emergency steam-operated pumps are not considered, as it is doubtful that year-round operation of a steam-driven pump could be economically justified. Some possible ways to increase the process water flows of each component part of the water system to the ultimate of 68,000 gpm are given.
Maximum work configurations of finite potential capacity reservoir chemical engines
[None listed]
2010-01-01
An isothermal endoreversible chemical engine operating between a finite potential capacity high-chemical-potential reservoir and an infinite potential capacity low-chemical-potential reservoir has been studied in this work. Optimal control theory was applied to determine the optimal cycle configurations corresponding to the maximum work output per cycle for a fixed total cycle time and a universal mass transfer law. Analyses of special examples showed that the optimal cycle configuration with the mass transfer law g ∝ Δμ, where Δμ is the chemical potential difference, is an isothermal endoreversible chemical engine cycle in which the chemical potential (or the concentration) of the key component in the working substance on the low-chemical-potential side is a constant, while the chemical potentials (or the concentrations) of the key component in the finite potential capacity high-chemical-potential reservoir and the corresponding side working substance change nonlinearly with time, and the difference of the chemical potentials (or the ratio of the concentrations) of the key component between the high-chemical-potential reservoir and the working substance is a constant. The optimal cycle configuration with the mass transfer law g ∝ Δμc, where Δμc is the concentration difference, differs significantly from that with the mass transfer law g ∝ Δμ. When the high-chemical-potential reservoir is also an infinite potential capacity chemical potential reservoir, the optimal cycle configuration of the isothermal endoreversible chemical engine consists of two constant chemical potential branches and two instantaneous constant mass-flux branches, which is independent of the mass transfer law. The object studied in this paper is general, and the results can provide some guidelines for the optimal design and operation of real chemical engines.
On the Maximum Storage Capacity of the Hopfield Model
Folli, Viola; Leonetti, Marco; Ruocco, Giancarlo
2017-01-01
Recurrent neural networks (RNN) have traditionally been of great interest for their capacity to store memories. In past years, several works have been devoted to determining the maximum storage capacity of RNN, especially for the case of the Hopfield network, the most popular kind of RNN. By analyzing the thermodynamic limit of the statistical properties of the Hamiltonian corresponding to the Hopfield neural network, it has been shown in the literature that the retrieval errors diverge when the number of stored memory patterns (P) exceeds a fraction (≈ 14%) of the network size N. In this paper, we study the storage performance of a generalized Hopfield model, where the diagonal elements of the connection matrix are allowed to be different from zero. We investigate this model at finite N. We give an analytical expression for the number of retrieval errors and show that, by increasing the number of stored patterns over a certain threshold, the errors start to decrease and reach values below unity for P ≫ N. We demonstrate that the strongest trade-off between efficiency and effectiveness relies on the number of patterns (P) that are stored in the network by appropriately fixing the connection weights. When P ≫ N and the diagonal elements of the adjacency matrix are not forced to be zero, the optimal storage capacity is obtained with a number of stored memories much larger than previously reported. This theory paves the way to the design of RNN with high storage capacity, able to retrieve the desired pattern without distortions. PMID:28119595
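A toy numerical check of the setup described here, a Hebbian Hopfield network with the self-couplings optionally retained, measuring one-step retrieval errors at finite N; all parameter values are illustrative assumptions:

```python
import numpy as np

def retrieval_errors(N=100, P=500, zero_diag=False, seed=0):
    """Per-bit one-step retrieval error rate of a Hebbian Hopfield net.

    Sketch of the generalized model: when zero_diag is False the
    self-couplings J_ii are kept, the regime in which the paper reports
    errors falling again for P >> N.
    """
    rng = np.random.default_rng(seed)
    xi = rng.choice([-1, 1], size=(P, N))      # stored patterns
    J = xi.T @ xi / N                          # Hebbian couplings
    if zero_diag:
        np.fill_diagonal(J, 0.0)
    errors = 0
    for mu in range(P):
        out = np.sign(J @ xi[mu])              # one synchronous update
        errors += int((out != xi[mu]).sum())
    return errors / (P * N)
```

Comparing retrieval_errors(zero_diag=True) with the default at increasing P gives a quick empirical feel for the effect of the diagonal elements discussed above.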
A Clustering Method Based on the Maximum Entropy Principle
Edwin Aldana-Bobadilla
2015-01-01
Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is one subset with the minimal possible degree of "disorder". They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are "similar" to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method's effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method.
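As a rough illustration of coupling a maximum-entropy objective with a genetic search, here is a toy sketch; the fitness function, penalty weight, and GA settings are all assumptions rather than the paper's configuration:

```python
import numpy as np

def entropy_of_partition(labels, k):
    """Shannon entropy of the cluster-size distribution."""
    p = np.bincount(labels, minlength=k) / len(labels)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def ga_max_entropy_clustering(X, k=3, pop=40, gens=200, seed=0):
    """Toy genetic search for a high-entropy labeling that keeps
    clusters internally similar (penalty = within-cluster variance)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    def fitness(lab):
        within = sum(X[lab == j].var() * (lab == j).sum()
                     for j in range(k) if (lab == j).any())
        return entropy_of_partition(lab, k) - 0.01 * within
    P = rng.integers(0, k, size=(pop, n))            # random labelings
    for _ in range(gens):
        f = np.array([fitness(lab) for lab in P])
        parents = P[np.argsort(f)[-pop // 2:]]       # truncation selection
        children = parents[rng.integers(0, len(parents), pop // 2)].copy()
        m = rng.random(children.shape) < 0.02        # point mutation
        children[m] = rng.integers(0, k, m.sum())
        P = np.vstack([parents, children])
    return P[np.argmax([fitness(lab) for lab in P])]
```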
Gieles, Mark; Larsen, Soeren; Bastian, Nate; Stein, Ilaan
2005-01-01
We introduce a method to relate a possible truncation of the star cluster mass function at the high-mass end to the shape of the cluster luminosity function (LF). We compare the observed LFs of five galaxies containing young star clusters with synthetic cluster population models with varying initial conditions. The LFs of the SMC, the LMC, and NGC 5236 are characterized by a power-law behavior N dL ∝ L^-a dL, with a mean exponent of a = 2.0 +/- 0.2. This can be explained by a cluster population formed with a constant cluster formation rate, in which the maximum cluster mass per logarithmic age bin is determined by the size-of-sample effect and therefore increases with log(age/yr). The LFs of NGC 6946 and M51 are better described by a double power-law distribution or a Schechter function. When a cluster population has a mass function that is truncated below the limit given by the size-of-sample effect, the total LF shows a bend at the magnitude of the maximum mass, with the age of the oldest cluster in the population...
Power optimization for maximum channel capacity in MIMO relay system
[None listed]
2007-01-01
Introducing a multiple-input multiple-output (MIMO) relay channel can offer significant capacity gain, and it is of great importance to develop effective power allocation strategies to achieve power efficiency and improve channel capacity in amplify-and-forward relay systems. This article investigates a two-hop MIMO relay system with multiple antennas at the relay node (RN) and receiver (RX). Maximizing capacity with antenna selection (MCAS) and maximizing capacity with eigen-decomposition (MCED) schemes are proposed to efficiently allocate power among antennas in the RN under first- and second-hop limited scenarios. The analysis and simulation results show that both MCED and MCAS can improve the channel capacity compared with the uniform power allocation (UPA) scheme in most of the studied cases. MCAS bears comparison with MCED with an acceptable capacity loss, but lowers the complexity by saving channel state information (CSI) feedback to the transmitter (TX). Moreover, when the RN is close to the RX, the performance of UPA is also close to the upper bound, as the performance of the first hop is limited.
Maximum Flow in Planar Networks with Exponentially Distributed Arc Capacities.
1984-12-01
avoid constructing the dual, are described in Itai and Shiloach [1979]. In this paper, we consider the maximum flow problem in (s,t) planar networks... use arc e and lies completely below P. If no such path exists we say P(e) = *. An algorithm to construct P(e) given P and e is described in Itai and... suggested in Ford and Fulkerson [1956], developed in Berge and Ghouila-Houri [1962], and its time complexity is reduced to O(|V| log |V|) by Itai and...
A MODEL FOR THE "MAXIMUM CAPACITY" OF ROOMS OR OF SPACE
Zi-yan WU
2003-01-01
A discrete optimum mathematical model to derive the "maximum capacity" of people in a room or in a space used for public gatherings is developed. There are two outcomes in the model. One is focused on whether the person farthest from exits can escape from the room. The other concentrates on the evacuation time of all the people in the room. According to the results of the two outcomes, a more reasonable "maximum capacity" can be worked out in a simple way.
Magic Numbers for Classical Lennard-Jones Cluster Heat Capacities
Frantz, D D
1994-01-01
Heat capacity curves as functions of temperature for classical atomic clusters bound by pairwise Lennard-Jones potentials were calculated for aggregate sizes from 4 to 24 using Monte Carlo methods. J-walking (or jump-walking) was used to overcome convergence difficulties due to quasi-ergodicity in the solid-liquid transition region. The heat capacity curves were found to differ markedly and nonmonotonically as functions of cluster size. Curves for N = 4, 5 and 8 consisted of a smooth, featureless, monotonic increase throughout the transition region, while curves for N = 7 and 15-17 showed a distinct shoulder in this region; the remaining clusters had distinguishable transition heat capacity peaks. The size and location of these peaks exhibited "magic number" behavior, with the most pronounced peaks occurring for magic number sizes of N = 13, 19 and 23. A comparison of the heat capacities with other cluster properties in the solid-liquid transition region that have been reported in the literature indicates par...
Maximum-entropy clustering algorithm and its global convergence analysis
ZHANG Zhihua
2001-01-01
［1］Bezdek, J. C., Pattern Recognition with Fuzzy Objective Function Algorithms, New York: Plenum, 1981.［2］Krishnapuram, R., Keller, J., A possibilistic approach to clustering, IEEE Trans. on Fuzzy Systems, 1993, 1(2): 98.［3］Yair, E., Zeger, K., Gersho, A., Competitive learning and soft competition for vector quantizer design, IEEE Trans. on Signal Processing, 1992, 40(2): 294.［4］Pal, N. R., Bezdek, J. C., Tsao, E. C. K., Generalized clustering networks and Kohonen's self-organizing scheme, IEEE Trans. on Neural Networks, 1993, 4(4): 549.［5］Karayiannis, N. B., Bezdek, J. C., Pal, N. R. et al., Repairs to GLVQ: a new family of competitive learning schemes, IEEE Trans. on Neural Networks, 1996, 7(5): 1062.［6］Karayiannis, N. B., Pai, P. I., Fuzzy algorithms for learning vector quantization, IEEE Trans. on Neural Networks, 1996, 7(5): 1196.［7］Karayiannis, N. B., A methodology for constructing fuzzy algorithms for learning vector quantization, IEEE Trans. on Neural Networks, 1997, 8(3): 505.［8］Karayiannis, N. B., Bezdek, J. C., An integrated approach to fuzzy learning vector quantization and fuzzy C-means clustering, IEEE Trans. on Fuzzy Systems, 1997, 5(4): 622.［9］Li Xing-si, An efficient approach to nonlinear minimax problems, Chinese Science Bulletin, 1992, 37(10): 802.［10］Li Xing-si, An efficient approach to a class of non-smooth optimization problems, Science in China, Series A, 1994, 37(3): 323.［11］Zangwill, W., Non-linear Programming: A Unified Approach, Englewood Cliffs: Prentice-Hall, 1969.［12］Fletcher, R., Practical Methods of Optimization, 2nd ed., New York: John Wiley & Sons, 1987.［13］Zhang Zhihua, Zheng Nanning, Wang Tianshu, Behavioral analysis and improving of generalized LVQ neural network, Acta Automatica Sinica, 1999, 25(5): 582.［14］Kirkpatrick, S., Gelatt, C. D., Vecchi, M. P., Optimization by simulated annealing, Science, 1983, 220(3): 671.［15］Ross, K., Deterministic annealing for...
STUDY ON MAXIMUM HYDROGEN CAPACITY FOR Zr-Ni AMORPHOUS ALLOY
[None listed]
2000-01-01
To design amorphous hydrogen storage alloys efficiently, the maximum hydrogen capacities for Zr-Ni amorphous alloy were calculated. Based on the Rhomb Unit Structure Model (RUSM) for amorphous alloys and the experimental result that hydrogen atoms occupy 3Zr1Ni and 4Zr tetrahedral interstices in Zr-Ni amorphous alloy, the numbers of 3Zr1Ni and 4Zr tetrahedral interstices in a RUSM, which correspond to the hydrogen capacity, were calculated. Two extremum Zr distribution states were considered: highly heterogeneous Zr distribution and homogeneous Zr distribution. The calculated curves of hydrogen capacity versus Zr content for the two states indicate that the hydrogen capacity increases with increasing Zr content and reaches its maximum when Zr is 75%. The theoretical maximum hydrogen capacity for Zr-Ni amorphous alloy is 2.0 (H/M). Meanwhile, the hydrogen capacity of the heterogeneous Zr distribution alloy is higher than that of the homogeneous one at the same Zr content. The experimental results confirm that the calculations are reasonable and, accordingly, explain the experimental observation that the Zr distribution in the amorphous alloy becomes heterogeneous after a few hydrogen absorption-desorption cycles.
Camarrone, Flavio; Ivanova, Anna; Decoster, Wivine; de Jong, Felix; van Hulle, Marc M
2015-01-01
To examine whether both the minimum and the maximum voice intensity (i.e., sound pressure level, SPL) curves of a voice range profile (VRP) are required when discovering different voice groups based on a clustering analysis. In this approach, no a priori labeling of voice types is used. VRPs of 194 (84 male and 110 female) professional singers were registered and processed. Cluster analysis was performed with the use of features related to (1) both the maximum and minimum SPL curves and (2) the maximum SPL curve only. Features related to the maximum as well as the minimum SPL curves showed three clusters in both male and female voices. These clusters, or voice groups, are based on voice types with similar VRP features. However, when using features related only to the maximum SPL curve, the clusters became less obvious. Features related to the maximum and minimum SPL curves of a VRP are both needed in order to identify the three voice clusters.
The Local Maximum Clustering Method and Its Application in Microarray Gene Expression Data Analysis
Chen Yidong
2004-01-01
An unsupervised data clustering method, called the local maximum clustering (LMC) method, is proposed for identifying clusters in experimental data sets based on research interest. A magnitude property is defined according to research purposes, and data sets are clustered around each local maximum of the magnitude property. By properly defining a magnitude property, this method can overcome many difficulties in microarray data clustering, such as reduced projection in similarities, noise, and arbitrary gene distribution. To critically evaluate the performance of this clustering method in comparison with other methods, we designed three model data sets with known cluster distributions and applied the LMC method as well as the hierarchical clustering method, the k-means clustering method, and the self-organized map method to these model data sets. The results show that the LMC method produces the most accurate clustering results. As an example of application, we applied the method to cluster the leukemia samples reported in the microarray study of Golub et al. (1999).
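A minimal sketch of the LMC idea, hill-climbing each point to the local maximum of a user-supplied magnitude property and grouping points by the maximum they reach; the `magnitude` function and `radius` stand in for the analyst-chosen inputs the abstract alludes to:

```python
import numpy as np

def local_maximum_clustering(X, magnitude, radius):
    """LMC sketch: each point steps toward the neighbor with the largest
    magnitude within `radius`; points sharing a local maximum form a
    cluster. `magnitude` and `radius` are hypothetical user inputs."""
    n = len(X)
    mag = np.asarray([magnitude(x) for x in X])
    parent = np.arange(n)
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.where(d <= radius)[0]
        parent[i] = nbrs[np.argmax(mag[nbrs])]   # step uphill in magnitude
    # follow pointers until each point reaches its local maximum
    for i in range(n):
        while parent[parent[i]] != parent[i]:
            parent[i] = parent[parent[i]]
    return parent                                # cluster label = local-max index
```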
Lihui Guo
2015-01-01
With the increasing penetration of wind power, the randomness and volatility of wind power output have a greater impact on the safety and steady operation of the power system. To address the uncertainty of wind speed and load demand, this paper applies box set robust optimization theory to determine the maximum allowable installed capacity of a wind farm, while constraints on node voltage and line capacity are considered. Optimization duality theory is used to simplify the model and convert the uncertain quantities in the constraints into deterministic quantities. For the condition of multiple wind farms, a bilevel optimization model to calculate penetration capacity is proposed. The result for the IEEE 30-bus system shows that the robust optimization model proposed in the paper is correct and effective, and indicates that the fluctuation ranges of wind speed and load and the importance degree of the grid connection points of wind farms and loads have an impact on the allowable capacity of the wind farm.
Tri-Laboratory Linux Capacity Cluster 2007 SOW
Seager, M
2007-03-22
... However, given the growing need for 'capability' systems as well, the budget demands are extreme, and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the first ASC investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding, and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.
Abhishek Khanna
2012-01-01
We revisit the problem of optimal power extraction in four-step cycles (two adiabatic and two heat-transfer branches) when the finite-rate heat transfer obeys a linear law and the heat reservoirs have finite heat capacities. The heat-transfer branch follows a polytropic process in which the heat capacity of the working fluid stays constant. For the case of an ideal gas as working fluid and a given switching time, it is shown that maximum work is obtained at the Curzon-Ahlborn efficiency. Our expressions clearly show the dependence on the relative magnitudes of the heat capacities of the fluid and the reservoirs. Many previous formulae, including infinite reservoirs, infinite-time cycles, and Carnot-like and non-Carnot-like cycles, are recovered as special cases of our model.
Zhang Zhang
2009-06-01
A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
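One divide step of such a likelihood-based segmentation, using a Bernoulli site model, exhaustive breakpoint search, and BIC acceptance, can be sketched as below; this is a minimal illustration under assumed parameter counts, not the authors' full hierarchical profile:

```python
import numpy as np

def bernoulli_loglik(x):
    """Log-likelihood of a 0/1 site vector under a single Bernoulli rate."""
    n, s = len(x), x.sum()
    p = s / n
    if p in (0.0, 1.0):
        return 0.0
    return s * np.log(p) + (n - s) * np.log(1 - p)

def best_split(x):
    """One divide step: find the breakpoint maximizing the two-segment
    likelihood, accepted only if BIC improves. Parameter counts (one
    rate vs. two rates plus a breakpoint) are modeling assumptions."""
    x = np.asarray(x)
    n = len(x)
    base = bernoulli_loglik(x)
    best_ll, best_i = -np.inf, None
    for i in range(1, n):
        ll = bernoulli_loglik(x[:i]) + bernoulli_loglik(x[i:])
        if ll > best_ll:
            best_ll, best_i = ll, i
    # BIC = -2 log L + k log n; the split model has 3 parameters vs. 1
    accept = (-2 * best_ll + 3 * np.log(n)) < (-2 * base + 1 * np.log(n))
    return (best_i, best_ll) if accept else (None, base)
```

Applying best_split recursively to each accepted segment yields the divide-and-conquer delineation of clustered sites that the abstract describes.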
A Load Balancing Algorithm Based on Maximum Entropy Methods in Homogeneous Clusters
Long Chen
2014-10-01
In order to solve the problems of ill-balanced task allocation, long response time, low throughput rate, and poor performance when a cluster system is assigning tasks, we introduce the concept of entropy from thermodynamics into load balancing algorithms. This paper proposes a new load balancing algorithm for homogeneous clusters based on the Maximum Entropy Method (MEM). By calculating the entropy of the system and using the maximum entropy principle to ensure that each scheduling and migration step follows the increasing tendency of the entropy, the system can achieve load-balanced status as soon as possible, shorten the task execution time, and enable high performance. The results of simulation experiments show that this algorithm is more advanced in the time and extent of load balancing of a homogeneous cluster system compared with traditional algorithms. It also provides novel solution ideas for the load balancing problem of homogeneous cluster systems.
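A greedy per-task version of the entropy-guided placement described above might look like this sketch; treating each assignment as choosing the node that maximizes the resulting system entropy is an assumption about the scheduling step, not the paper's exact rule:

```python
import numpy as np

def entropy(loads):
    """Entropy of the normalized load distribution; maximal when all
    nodes carry equal load."""
    p = np.asarray(loads, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def assign_task(loads, task):
    """Place the task on the node whose choice yields the largest
    system entropy, i.e., the best-balanced outcome."""
    candidates = []
    for i in range(len(loads)):
        trial = list(loads)
        trial[i] += task
        candidates.append(entropy(trial))
    return int(np.argmax(candidates))
```

With equal-sized tasks this reduces to always picking the least-loaded node, which matches the intuition that entropy increases as loads equalize.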
Ghiyasvand Mehdi
2016-01-01
In this paper, a new problem on a directed network is presented. Let D be a feasible network such that all arc capacities are equal to U. Given a t > 0, the network D with arc capacities U - t is called the t-network. The goal of the problem is to compute the largest t such that the t-network is feasible. First, we present a weakly polynomial time algorithm to solve this problem, which runs in O(log(nU)) maximum flow computations, where n is the number of nodes. Then, an O(m²n) time approach is presented, where m is the number of arcs. Both the weakly and strongly polynomial algorithms are inspired by McCormick and Ervolina (1994).
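The weakly polynomial approach lends itself to a bisection sketch: probe a candidate t, rebuild the (U - t)-network, and test feasibility with one max-flow computation. Below, feasibility is modeled as meeting a single source-sink demand, a simplifying assumption relative to the paper's general feasible networks; networkx is assumed for the max-flow subroutine:

```python
import networkx as nx  # assumed available for the max-flow subroutine

def largest_t(G, s, t_node, demand, U, tol=1e-6):
    """Bisection for the largest t keeping the (U - t)-network feasible.

    Feasibility here means "max flow from s to t_node still meets
    `demand`" (an assumption). Each probe costs one max-flow call,
    mirroring the O(log(nU)) max-flow-computation bound above.
    """
    lo, hi = 0.0, U
    while hi - lo > tol:
        mid = (lo + hi) / 2
        H = G.copy()
        for u, v in H.edges:
            H[u][v]['capacity'] = U - mid    # capacities of the t-network
        flow, _ = nx.maximum_flow(H, s, t_node)
        if flow >= demand:
            lo = mid                          # still feasible: push t up
        else:
            hi = mid
    return lo
```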
Yang, Hui; Zhu, Xiaoxu; Bai, Wei; Zhao, Yongli; Zhang, Jie; Liu, Zhu; Zhou, Ziguan; Ou, Qinghai
2016-09-01
Virtualization is considered to be a promising solution to support various emerging applications. This paper illustrates the problem of virtual mapping from a new perspective, and mainly focuses on survivable mapping of virtual networks and the potential trade-off between spectral resource usage effectiveness and failure resilience level. We design an optimum shared protection mapping (OSPM) scheme in elastic optical networks. A differentiable maximum shared capacity of each frequency slot is defined to share protection resources more efficiently. In order to satisfy various assessment standards, a metric called ambiguity similitude is defined for the first time to give insight into the optimization difficulty. Simulation results are presented to compare the outcome of the novel OSPM algorithm with traditional dedicated link protection and maximum shared protection mapping. By synthetic analysis, OSPM outperforms the other two schemes in striking a balance among blocking probability, resource utilization, protection success rate, and spectrum redundancy.
Hydrophilic carbon clusters as therapeutic, high capacity antioxidants
Samuel, Errol L. G.; Duong, MyLinh T.; Bitner, Brittany R.; Marcano, Daniela C.; Tour, James M.; Kent, Thomas A.
2014-01-01
Oxidative stress reflects an excessive accumulation of reactive oxygen species (ROS) and is a hallmark of several acute and chronic human pathologies. While many antioxidants have been investigated, the majority have demonstrated poor efficacy in clinical trials. Here, we discuss limitations of current antioxidants and describe a new class of nanoparticle antioxidants, poly(ethylene glycol)-functionalized hydrophilic carbon clusters (PEG-HCCs). PEG-HCCs show high capacity to annihilate ROS such as superoxide and hydroxyl radicals, show no reactivity toward nitric oxide, and can be functionalized with targeting moieties without loss of activity. Given these properties, we propose that PEG-HCCs offer an exciting new area of study for treatment of numerous ROS-induced human pathologies. PMID:25175886
Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks †
Wang, Zhaowei; Zeng, Peng; Zhou, Mingtuo; Li, Dong; Wang, Jintao
2017-01-01
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs’ demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays. PMID:28098750
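The intra-cluster maximum-consensus step has a particularly compact form: every node repeatedly adopts the largest logical clock among itself and its neighbors. A minimal sketch (clock drift, offsets, and the inter-cluster exchange via overlapping nodes are omitted here):

```python
def max_consensus_sync(clocks, neighbors, rounds=10):
    """Intra-cluster maximum-consensus sketch: per round, each node
    takes the max logical clock over itself and its neighbors, so all
    nodes in a connected cluster converge to the cluster-wide maximum.
    `neighbors[i]` lists the indices adjacent to node i."""
    clocks = list(clocks)
    for _ in range(rounds):
        new = clocks[:]
        for i, nbrs in enumerate(neighbors):
            new[i] = max([clocks[i]] + [clocks[j] for j in nbrs])
        clocks = new
    return clocks
```

In a connected cluster the values converge in at most diameter-many rounds, which is the fast-convergence property the abstract emphasizes.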
Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks
Zhaowei Wang
2017-01-01
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs' demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays.
Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks.
Wang, Zhaowei; Zeng, Peng; Zhou, Mingtuo; Li, Dong; Wang, Jintao
2017-01-13
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs' demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays.
Impact of Maximum Allowable Cost on CO2 Storage Capacity in Saline Formations.
Mathias, Simon A; Gluyas, Jon G; Goldthorpe, Ward H; Mackay, Eric J
2015-11-17
Injecting CO2 into deep saline formations represents an important component of many greenhouse-gas-reduction strategies for the future. A number of authors have posed concern over the thousands of injection wells likely to be needed. However, a more important criterion than the number of wells is whether the total cost of storing the CO2 is market-bearable. Previous studies have sought to determine the number of injection wells required to achieve a specified storage target. Here an alternative methodology is presented whereby we specify a maximum allowable cost (MAC) per ton of CO2 stored, a priori, and determine the corresponding potential operational storage capacity. The methodology takes advantage of an analytical solution for pressure build-up during CO2 injection into a cylindrical saline formation, accounting for two-phase flow, brine evaporation, and salt precipitation around the injection well. The methodology is applied to 375 saline formations from the U.K. Continental Shelf. Parameter uncertainty is propagated using Monte Carlo simulation with 10 000 realizations for each formation. The results show that MAC affects both the magnitude and spatial distribution of potential operational storage capacity on a national scale. Different storage prospects can appear more or less attractive depending on the MAC scenario considered. It is also shown that, under high well-injection rate scenarios with relatively low cost, there is adequate operational storage capacity for the equivalent of 40 years of U.K. CO2 emissions.
Bohui Zhu
2013-01-01
This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three types of performance evaluation indicators are used to assess the effect of the IEMMC method for ECG arrhythmias: sensitivity, specificity, and accuracy. Compared with K-means and iterSVR algorithms, the IEMMC algorithm shows better performance not only in clustering results but also in terms of global search ability and convergence ability, which proves its effectiveness for the detection of ECG arrhythmias.
Maximum jaw opening capacity in adolescents in relation to general joint mobility.
Westling, L; Helkimo, E
1992-09-01
Mandibular jaw opening was related to general joint mobility in a non-patient adolescent group. The angular rotation of the mandible at maximum jaw opening was slightly larger in females than in males and significantly larger in hypermobile individuals. No significant relationship between linear measurement of maximal mandibular opening capacity and peripheral joint mobility was found either at active (AROM) or at passive range of mandibular opening (PROM). PROM was strongly correlated with mandibular length. Clinical signs in the great jaw-closing muscles could not be associated with decreased AROM. The mean value of the difference between PROM and AROM (DPA) was 1.2 mm. Frequent clenching and/or grinding was correlated with increased DPA only in hypermobile adolescents (r = 0.49***). All those with DPA exceeding 5 mm had reciprocal clicking.
P. Heydari
2016-02-01
Background: The maximum aerobic capacity (VO2max) can be used to evaluate the cardio-pulmonary condition and to provide a physiological balance between a person and his job. Objectives: The aim of this study was to estimate the maximum aerobic capacity and its associated factors among students of medical emergencies in Qazvin. Methods: This cross-sectional study was conducted on 36 male students of medical emergencies at Qazvin University of Medical Sciences in 2015. The Physical Activity Readiness Questionnaire (PAR-Q) and a demographic questionnaire were completed by the participants. The participants meeting the inclusion criteria were assessed using the Gerkin treadmill protocol. Data were analyzed using the Mann-Whitney U test and the Kruskal-Wallis test. Findings: Mean maximum aerobic capacity was 1.94±0.27 L/min. The maximum aerobic capacity was associated with weight and height groups. There was a significant positive correlation between maximal aerobic capacity and height, weight, and body mass index. Conclusion: The Gerkin treadmill test is useful for estimation of the maximum aerobic capacity and the maximum working ability in students of medical emergencies.
L. T. Murray
2013-09-01
The oxidative capacity of past atmospheres is highly uncertain. We present here a new climate-biosphere-chemistry modeling framework to determine oxidant levels in the present and past troposphere. We use the GEOS-Chem chemical transport model driven by meteorological fields from the NASA Goddard Institute for Space Studies (GISS) ModelE, with land cover and fire emissions from dynamic global vegetation models. We present time-slice simulations for the present day, the late preindustrial (AD 1770), and the Last Glacial Maximum (LGM; 19–23 ka), and we test the sensitivity of model results to uncertainty in lightning and fire emissions. We find that most preindustrial and paleoclimate simulations yield reduced oxidant levels relative to the present day. Contrary to prior studies, tropospheric mean OH in our ensemble shows little change at the LGM relative to the preindustrial (0.5 ± 12%), despite large reductions in methane concentrations. We find a simple linear relationship between tropospheric mean ozone photolysis rates, water vapor, and total emissions of NOx and reactive carbon that explains 72% of the variability in global mean OH in 11 different simulations across the last glacial-interglacial time interval and the Industrial Era. Key parameters controlling the tropospheric oxidative capacity over glacial-interglacial periods include overhead stratospheric ozone, tropospheric water vapor, and lightning NOx emissions. Variability in global mean OH since the LGM is insensitive to fire emissions. Our simulations are broadly consistent with ice-core records of Δ17O in sulfate and nitrate at the LGM, and of CO, HCHO, and H2O2 in the preindustrial. Our results imply that the glacial-interglacial changes in atmospheric methane observed in ice cores are predominantly driven by changes in its sources as opposed to its sink with OH.
Chaos control of ferroresonance system based on RBF-maximum entropy clustering algorithm
Liu Fan; Sun Caixin; Sima Wenxia; Liao Ruijin; Guo Fei (Key Lab of High Voltage and Electrical New Technology of Ministry of Education, Chongqing University, Chongqing 400044, China). E-mail: liufan2003@yahoo.com.cn
2006-09-11
With regard to the ferroresonance overvoltage of neutral grounded power systems, a maximum-entropy learning algorithm based on radial basis function neural networks is used to control the chaotic system. The algorithm optimizes the objective function to derive the learning rule for the central vectors, and uses the clustering function of the network hidden layers. It improves the regression and learning ability of the neural network. A numerical experiment on the ferroresonance system testifies to the effectiveness and feasibility of using the algorithm to control chaos in neutral grounded systems.
Giese, Heiner; Azizan, Amizon; Kümmel, Anne; Liao, Anping; Peter, Cyril P; Fonseca, João A; Hermann, Robert; Duarte, Tiago M; Büchs, Jochen
2014-02-01
In biotechnological screening and production, oxygen supply is a crucial parameter. Even though oxygen transfer is well documented for viscous cultivations in stirred tanks, little is known about the gas/liquid oxygen transfer in shake flask cultures that become increasingly viscous during cultivation. In particular, the oxygen transfer into the liquid film adhering to the shake flask wall has not yet been described for such cultivations. In this study, the oxygen transfer of chemical and microbial model experiments was measured, and the suitability of the widely applied film theory of Higbie was studied. With numerical simulations of Fick's law of diffusion, it was demonstrated that Higbie's film theory does not apply to cultivations at viscosities up to 10 mPa s. For the first time, it was experimentally shown that the maximum oxygen transfer capacity OTRmax increases in shake flasks when viscosity is increased from 1 to 10 mPa s, leading to an improved oxygen supply for microorganisms. Additionally, the OTRmax does not fall significantly below its value at water-like viscosities, even at elevated viscosities of up to 80 mPa s. In this range, a shake flask is, to some extent, a self-regulating system with respect to oxygen supply. This is contrary to stirred tanks, where the oxygen supply is steadily reduced to only 5% at 80 mPa s. Since the liquid film formation at shake flask walls inherently promotes the oxygen supply at moderate and elevated viscosities, these results have significant implications for scale-up.
Benkhelifa, Fatma
2013-04-01
In this letter, we study the ergodic capacity of a maximum ratio combining (MRC) Rician fading channel with full channel state information (CSI) at the transmitter and at the receiver. We focus on the low Signal-to-Noise Ratio (SNR) regime and we show that the capacity scales as (LΩ/(K+L)) SNR log(1/SNR), where Ω is the expected channel gain per branch, K is the Rician fading factor, and L is the number of diversity branches. We show that one-bit CSI feedback at the transmitter is enough to achieve this capacity using an on-off power control scheme. Our framework can be seen as a generalization of recently established results regarding the fading-channel capacity characterization in the low-SNR regime.
Bahrami Hamid Reza
2007-01-01
The ergodic capacity of MIMO frequency-flat and -selective channels depends greatly on the eigenvalue distribution of spatial correlation matrices. Knowing the eigenstructure of correlation matrices at the transmitter is very important for enhancing the capacity of the system. This fact becomes of great importance in MIMO wireless systems where, because of the fast-changing nature of the underlying channel, full channel knowledge is difficult to obtain at the transmitter. In this paper, we first investigate the effect of the eigenvalue distribution of spatial correlation matrices on the capacity of frequency-flat and -selective channels. Next, we introduce a practical scheme known as linear precoding that can enhance the ergodic capacity of the channel by changing the eigenstructure of the channel through a linear transformation. We derive the structures of the precoders using eigenvalue decomposition and linear algebra techniques in both cases and show their similarities from an algebraic point of view. Simulations show the ability of this technique to change the eigenstructure of the channel, and hence enhance the ergodic capacity considerably.
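A sketch of the eigen-decomposition step described above: build the precoder from the eigenvectors of the transmit correlation matrix and distribute power over the eigenmodes. Water-filling is assumed here as the allocation rule, which is a standard choice rather than necessarily the paper's exact derivation:

```python
import numpy as np

def precoder_from_correlation(R, total_power=1.0, noise=1.0):
    """Linear precoding sketch: eigen-decompose the transmit correlation
    matrix R and water-fill the power budget over its eigenmodes,
    reshaping the channel's effective eigenstructure."""
    w, V = np.linalg.eigh(R)                       # ascending eigenvalues
    w, V = np.maximum(w[::-1], 1e-12), V[:, ::-1]  # strongest mode first
    lo, hi = 0.0, total_power + noise / w.min()    # bracket the water level
    for _ in range(100):                           # bisection on level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise / w, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    p = np.maximum(0.5 * (lo + hi) - noise / w, 0.0)  # per-mode powers
    return V @ np.diag(np.sqrt(p))                 # precoder F = V diag(sqrt(p))
```

The returned matrix steers transmit power along the correlation eigenvectors, concentrating it on strong modes when the budget is tight, which is the capacity-enhancing mechanism the abstract describes.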
Ar Rasyid Shadiqin
2013-11-01
This study aimed to compare the blood cholesterol profile before and after the measurement of maximum aerobic capacity (VO2max) in the students of Jurusan Pendidikan Olahraga dan Kesehatan (JPOK), Fakultas Keguruan dan Ilmu Pendidikan (FKIP), Universitas Lambung Mangkurat Banjarmasin. Variables in this study consist of lipid profiles, including total cholesterol, high density lipoprotein (HDL), low density lipoprotein (LDL), triglyceride (TG), and Maximum Aerobic Capacity (VO2max). The concept of VO2max according to Kent (1994: 268): "maximum oxygen volume consumed per minute to show total work capacity, or volume per minute relative to body weight (ml/kg.min)". Operationally, VO2max referred to in this study is the maximum volume of oxygen that can be consumed per minute, as measured by a progressive run (Bleep Test). The method used in this study is pre-experimental with a one-group pretest-posttest design. This design implies that a group of subjects is treated for a specific period and measurements are taken both pre and post. The results: There are changes in the blood cholesterol profile after the measurement of maximum oxygen capacity (VO2max), shown by a significant decrease in total cholesterol, increased HDL, and decreased LDL. The change in triglycerides showed no significant decrease despite the statistical differences. The specific HDL sub-class increasing after exercise is a constructive lipoprotein sub-class, whereas LDL is a destructive lipoprotein sub-class that might damage the body. Therefore, the increase in HDL and decrease in LDL found in this study appear to be advantageous and consequently might alter the risk of coronary heart disease.
BV-capacities on Wiener Spaces and Regularity of the Maximum of the Wiener Process
Trevisan, Dario
2012-01-01
We define a capacity C on abstract Wiener spaces and prove that, for any u with bounded variation, the total variation measure |Du| is absolutely continuous with respect to C: this enables us to extend the usual rules of calculus in many cases dealing with BV functions. As an application, we show that, on the classical Wiener space, the random variable sup_{0 ≤ t ≤ T} W_t admits a measure as second derivative, whose total variation measure is singular w.r.t. the Wiener measure.
Assessment of Maximum Aerobic Capacity and Anaerobic Threshold of Elite Ballet Dancers.
Wyon, Matthew A; Allen, Nick; Cloak, Ross; Beck, Sarah; Davies, Paul; Clarke, Frances
2016-09-01
An athlete's cardiorespiratory profile, maximal aerobic capacity, and anaerobic threshold is affected by training regimen and competition demands. The present study aimed to ascertain whether there are company rank differences in maximal aerobic capacity and anaerobic threshold in elite classical ballet dancers. Seventy-four volunteers (M 34, F 40) were recruited from two full-time professional classical ballet companies. All participants completed a continuous incremental treadmill protocol with a 1-km/hr speed increase at the end of each 1-min stage until termination criteria had been achieved (e.g., voluntary cessation, respiratory exchange ratio <1.15, HR ±5 bpm of estimated HRmax). Peak VO2 (5-breathe smooth) was recorded and anaerobic threshold calculated using ventilatory curve and ventilatory equivalents methods. Statistical analysis reported between-subject effects for gender (F1,67=35.18, p<0.001) and rank (F1,67=8.67, p<0.001); post hoc tests reported soloists (39.5±5.15 mL/kg/min) as having significantly lower VO2 peak than artists (45.9±5.75 mL/kg/min, p<0.001) and principal dancers (48.07±3.24 mL/kg/min, p<0.001). Significant differences in anaerobic threshold were reported for age (F1,67=7.68, p=0.008) and rank (F1,67=3.56, p=0.034); post hoc tests reported artists (75.8±5.45%) having significantly lower anaerobic threshold than soloists (80.9±5.71, p<0.01) and principals (84.1±4.84%, p<0.001). The observed differences in VO2 peak and anaerobic threshold between the ranks in ballet companies are probably due to the different rehearsal and performance demands.
FANG Chuanglin; LIU Xiaoli
2010-01-01
Studying the carrying capacity of resources and environment of city clusters in central China has important practical guidance significance for promoting the healthy, sustainable, and stable development of this region. According to their influencing factors and reciprocity mechanism, and using system dynamics approaches, this paper built an SD model for measuring the carrying capacity of resources and environment of the city clusters in central China; through setting different development models, a comprehensive measurement analysis of the carrying capacity was carried out. The results show that the model of promoting socio-economic development under the protection of resources and environment is the optimal model for promoting the harmonious development of resources, environment, society, and economy in the city clusters. According to this model, the optimum population scale of the city clusters in 2020 is 42.80×10^6 persons, and the moderate economic development scale is 22.055×10^12 yuan (RMB). In 1996-2020, the carrying capacity of resources and environment in the city clusters took on obvious phase-change characteristics. During the studied period, it is basically at the initial development stage, and will come through the development process from slow development to speedup development.
Ghani, Kay Dora Abd.; Tukiar, Mohd Azuan; Hamid, Nor Hayati Abdul
2017-08-01
Malaysia is surrounded by the tectonic features of the Sumatera area, which consists of two seismically active inter-plate boundaries, namely the Indo-Australian and the Eurasian Plates on the west and the Philippine Plate on the east. Hence, Malaysia experiences tremors from distant earthquakes occurring in Banda Aceh, Nias Island, Padang, and other parts of Sumatera, Indonesia. In order to predict the safety of precast buildings in Malaysia under near-field ground motion, response spectrum analysis can be used for dealing with future earthquakes whose specific nature is unknown. This paper aimed to develop capacity-demand response spectra subject to the Design Basis Earthquake (DBE) and Maximum Considered Earthquake (MCE) in order to assess the performance of precast beam-column joints. From the capacity-demand response spectrum analysis, it can be concluded that the precast beam-column joints would not survive when subjected to earthquake excitation with a surface-wave magnitude, Mw, of more than 5.5 on the Richter scale (Type 1 spectra). This means that a beam-column joint designed using the current code of practice (BS8110) would be severely damaged when subjected to high earthquake excitation. The capacity-demand response spectrum analysis also shows that the precast beam-column joints in the prototype studied would be severely damaged when subjected to the Maximum Considered Earthquake (MCE) with PGA = 0.22g, having a surface-wave magnitude of more than 5.5 on the Richter scale.
Brileya, Kristen A; Camilleri, Laura B; Zane, Grant M; Wall, Judy D; Fields, Matthew W
2014-01-01
Sulfate-reducing bacteria (SRB) can interact syntrophically with other community members in the absence of sulfate, and interactions with hydrogen-consuming methanogens are beneficial when these archaea consume potentially inhibitory H2 produced by the SRB. A dual continuous culture approach was used to characterize population structure within a syntrophic biofilm formed by the SRB Desulfovibrio vulgaris Hildenborough and the methanogenic archaeum Methanococcus maripaludis. Under the tested conditions, monocultures of D. vulgaris formed thin, stable biofilms, but monoculture M. maripaludis did not. Microscopy of intact syntrophic biofilm confirmed that D. vulgaris formed a scaffold for the biofilm, while intermediate and steady-state images revealed that M. maripaludis joined the biofilm later, likely in response to H2 produced by the SRB. Close interactions in structured biofilm allowed efficient transfer of H2 to M. maripaludis, and H2 was only detected in cocultures with a mutant SRB that was deficient in biofilm formation (ΔpilA). M. maripaludis produced more carbohydrate (uronic acid, hexose, and pentose) as a monoculture compared to total coculture biofilm, and this suggested an altered carbon flux during syntrophy. The syntrophic biofilm was structured into ridges (∼300 × 50 μm) and models predicted lactate limitation at ∼50 μm biofilm depth. The biofilm had structure that likely facilitated mass transfer of H2 and lactate, yet maximized biomass with a more even population composition (number of each organism) when compared to the bulk-phase community. Total biomass protein was equivalent in lactate-limited and lactate-excess conditions when a biofilm was present, but in the absence of biofilm, total biomass protein was significantly reduced. The results suggest that multispecies biofilms create an environment conducive to resource sharing, resulting in increased biomass retention, or carrying capacity, for cooperative populations.
Inoue, T.; Yurimoto, H.
2012-12-01
Water is the most important volatile component in the Earth and affects the physicochemical properties of mantle minerals, e.g. density, elastic properties, electrical conductivity, thermal conductivity, rheology, melting temperature, melt composition, element partitioning, etc. Many high-pressure experiments have therefore been conducted to determine the effect of water on mantle minerals. Clarifying the maximum water storage capacity of nominally anhydrous mantle minerals in the mantle transition zone and the lower mantle is an important issue for discussing the possible existence of water reservoirs in the Earth's mantle. We have therefore been determining the maximum water storage capacity of mantle minerals using a MA-8 type (Kawai-type) high-pressure apparatus and SIMS (secondary ion mass spectrometry). The upper-mantle mineral olivine can contain at most ~0.9 wt% H2O under conditions just above the 410 km discontinuity (e.g. Chen et al., 2002; Smyth et al., 2006). On the other hand, the mantle transition zone minerals wadsleyite and ringwoodite can contain a significant amount (about 2-3 wt%) of H2O (e.g. Inoue et al., 1995, 1998, 2010; Kawamoto et al., 1996; Ohtani et al., 2000). But the lower-mantle mineral perovskite cannot contain a significant amount of H2O, less than ~0.1 wt% (e.g. Murakami et al., 2002; Inoue et al., 2010). In addition, garnet and stishovite also cannot contain significant amounts of H2O (e.g. Katayama et al., 2003; Mookherjee and Karato, 2010; Litasov et al., 2007). On the other hand, the water storage capacities of mantle minerals are expected to be strongly coupled with Al through substitution for Mg2+, Si4+ or Mg2+ + Si4+, because Al3+ is a trivalent cation and H+ is a monovalent cation. To clarify the degree of this substitution, the water contents and chemical compositions of Al-bearing minerals in the mantle transition zone and the lower mantle were also determined in Al-bearing systems with H2O. We will introduce the
ZHANG Yang-zhu; HUANG Shun-hong; WAN Da-juan; HUANG Yun-xiang; ZHOU Wei-jun; ZOU Ying-bin
2007-01-01
In order to understand the status of fixed ammonium in major types of tillage soils of Hunan Province, China, the fixed ammonium content, the maximum capacity of ammonium fixation, and their influencing factors were studied using field sampling, laboratory incubation and determination. The main results are summarized as follows: (1) The content of fixed ammonium in the tested soils varies greatly with soil use pattern and the nature of the parent material. For the paddy soils it ranges from 135.4 ± 57.4 to 412.8 ± 32.4 mg kg-1, with an average of 304.7 ± 96.7 mg kg-1, while for the upland soils it ranges from 59.4 to 435.7 mg kg-1, with an average of 230.1 ± 89.2 mg kg-1. Soils developed from limnic material and slate had higher fixed ammonium contents than soils developed from granite. The percentage of fixed ammonium relative to total N is always higher in the upland soils than in the paddy soils: it ranges from 6.1 ± 3.6% to 16.6 ± 4.6%, with an average of 14.0 ± 5.1%, for the paddy soils, and from 5.8 ± 2.0% to 40.1 ± 17.8%, with an average of 23.5 ± 14.2%, for the upland soils. (2) The maximum capacity of ammonium fixation shows the same trend as the fixed ammonium content in the tested soils. For all the tested soils, the percentage of recently fixed ammonium relative to the maximum capacity of ammonium fixation is always below 20%, which may be because the soils have high fertility and high saturation of ammonium-fixing sites. (3) The clay content and clay composition of the tested soils are two important factors influencing their fixed ammonium content and maximum capacity of ammonium fixation. The results showed that hydrous mica is the main 2:1 type clay mineral in the <0.02 mm clay of the paddy soils, and its content in the 0.02-0.002 mm clay is much higher than that in the <0.002 mm clay. Statistical analysis showed that both the fixed ammonium content and the maximum capacity of ammonium fixation of the paddy soils were positively correlated with
Melnikov, A. A.; Kostishin, V. G.; Alenkov, V. V.
2016-09-01
Real operating conditions of a thermoelectric cooling device involve thermal resistances between the thermoelectric material and the heat medium or cooled object. These resistances limit device performance and should be considered in modeling. Here we propose a dimensionless steady-state mathematical model that takes them into account. Analytical equations are given for the dimensionless cooling capacity, voltage, and coefficient of performance (COP) as functions of dimensionless current. For improved accuracy, a device can be modeled using numerical or combined analytical-numerical methods. The modeling results are in acceptable agreement with experimental results. The case of zero temperature difference between the hot and cold heat media, at which the maximum cooling capacity mode appears, is considered in detail. Optimal device parameters for maximum cooling capacity, such as the fraction of thermal conductance on the cold side y and the fraction of current relative to the maximum j', are estimated to lie in the ranges 0.38-0.44 and 0.48-0.95, respectively, for dimensionless conductance K' = 5-100. A method for determining the thermal resistances of a thermoelectric cooling system is also proposed.
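The kind of model described here can be sketched as a standard Peltier-module energy balance with finite thermal resistances between the junctions and the heat media, solved by fixed-point iteration. The parameter values (alpha, R, K, R_c, R_h) below are illustrative assumptions, not the paper's dimensionless quantities.

```python
import numpy as np

def cooling_capacity(I, T_cm=300.0, T_hm=300.0, alpha=0.05, R=1.5, K=0.8,
                     R_c=0.05, R_h=0.02, n_iter=200):
    """Steady-state cooling capacity Qc (W) of a Peltier module whose
    junctions see the heat media through thermal resistances R_c, R_h (K/W).
    alpha: module Seebeck coefficient (V/K), R: electrical resistance (ohm),
    K: internal thermal conductance (W/K). Fixed-point iteration on the
    junction temperatures Tc, Th."""
    Tc, Th = T_cm, T_hm
    for _ in range(n_iter):
        Qc = alpha * Tc * I - 0.5 * I**2 * R - K * (Th - Tc)  # pumped heat
        P = alpha * (Th - Tc) * I + I**2 * R                  # input power
        Tc = T_cm - Qc * R_c        # cold junction sits below its medium
        Th = T_hm + (Qc + P) * R_h  # hot junction sits above its medium
    return Qc

# Zero temperature difference between the heat media: the maximum
# cooling capacity mode discussed in the abstract.
currents = np.linspace(0.5, 12.0, 60)
Qcs = np.array([cooling_capacity(I) for I in currents])
i = int(np.argmax(Qcs))
print("max Qc = %.1f W at I = %.1f A" % (Qcs[i], currents[i]))
```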
Reina-Campos, Marta; Kruijssen, J. M. Diederik
2017-08-01
We present a simple, self-consistent model to predict the maximum masses of giant molecular clouds (GMCs), stellar clusters and high-redshift clumps as a function of the galactic environment. Recent works have proposed that these maximum masses are set by shearing motions and centrifugal forces, but we show that this idea is inconsistent with the low masses observed across an important range of local-Universe environments, such as low-surface density galaxies and galaxy outskirts. Instead, we propose that feedback from young stars can disrupt clouds before the global collapse of the shear-limited area is completed. We develop a shear-feedback hybrid model that depends on three observable quantities: the gas surface density, the epicyclic frequency and the Toomre parameter. The model is tested in four galactic environments: the Milky Way, the Local Group galaxy M31, the spiral galaxy M83 and the high-redshift galaxy zC406690. We demonstrate that our model simultaneously reproduces the observed maximum masses of GMCs, clumps and clusters in each of these environments. We find that clouds and clusters in M31 and in the Milky Way are feedback-limited beyond radii of 8.4 and 4 kpc, respectively, whereas the masses in M83 and zC406690 are shear-limited at all radii. In zC406690, the maximum cluster masses decrease further due to their inspiral by dynamical friction. These results illustrate that the maximum masses change from being shear-limited to being feedback-limited as galaxies become less gas rich and evolve towards low shear. This explains why high-redshift clumps are more massive than GMCs in the local Universe.
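For the shear-limited branch of such a model, one common ingredient is the two-dimensional Toomre mass, M_T = 4 pi^5 G^2 Sigma^3 / kappa^4, the gas mass enclosed by the largest region that can collapse against centrifugal support. A minimal sketch with illustrative solar-neighbourhood numbers rather than the paper's calibration (the feedback limit is omitted):

```python
import numpy as np

G = 4.5e-3  # gravitational constant in pc^3 Msun^-1 Myr^-2

def toomre_mass(sigma_gas, kappa):
    """Two-dimensional Toomre mass M_T = 4 pi^5 G^2 Sigma^3 / kappa^4:
    gas mass of the largest region unstable against shear.
    sigma_gas in Msun/pc^2, epicyclic frequency kappa in 1/Myr."""
    return 4.0 * np.pi**5 * G**2 * sigma_gas**3 / kappa**4

# Illustrative solar-neighbourhood values: Sigma ~ 10 Msun/pc^2,
# kappa ~ 0.04 Myr^-1 (~ 40 km/s/kpc).
print("M_T = %.1e Msun" % toomre_mass(10.0, 0.04))  # ~1e7 Msun, GMC scale
```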
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Zhou, W.; Qiu, G. Y.; Oodo, S. O.; He, H.
2013-03-01
An increasing interest in wind energy and advances in related technologies have increased the connection of wind power generation to electrical grids. This paper proposes an optimization model for determining the maximum capacity of wind farms in a power system. In this model, generator power output limits, voltage limits and thermal limits of branches in the grid were considered in order to limit the steady-state security impact of wind generators on the power system. The optimization model was solved by a nonlinear primal-dual interior-point method. An IEEE 30-bus system with two wind farms was tested through simulation studies, and an analysis was conducted to verify the effectiveness of the proposed model. The results indicate that the model is efficient and reasonable.
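A toy version of such a capacity-maximization problem can be written down directly; the sketch below maximizes total wind injection subject to linearized branch-flow limits using scipy's SLSQP solver. The sensitivity matrix and limits are made-up numbers, and the paper instead solves the full nonlinear AC problem with a primal-dual interior-point method.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the security-constrained model: maximise total wind
# injection P = [P_w1, P_w2] (MW) subject to linearised branch thermal
# limits |H @ P| <= flow_limits. H and the limits are made-up numbers.
H = np.array([[0.6, 0.2],
              [0.3, 0.7],
              [0.1, 0.4]])           # MW of branch flow per MW injected
flow_limits = np.array([40.0, 50.0, 30.0])

res = minimize(lambda P: -np.sum(P),                  # maximise capacity
               x0=[10.0, 10.0],
               bounds=[(0.0, 100.0)] * 2,             # generator limits
               constraints=[{'type': 'ineq',
                             'fun': lambda P: flow_limits - H @ P},
                            {'type': 'ineq',
                             'fun': lambda P: flow_limits + H @ P}])
print("max wind capacity %.1f MW at P = %s" % (-res.fun, np.round(res.x, 1)))
```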
Chappell, Mark; Odell, Jason
2004-01-01
We measured maximal oxygen consumption (VO(2max)) and burst speed in populations of Trinidadian guppies (Poecilia reticulata) from contrasting high- and low-predation habitats but reared in "common garden" conditions. We tested two hypotheses: first, that predation, which causes rapid life-history evolution in guppies, also impacts locomotor physiology, and second, that trade-offs would occur between burst and aerobic performance. VO(2max) was higher than predicted from allometry, and resting VO(2) was lower than predicted. There were small interdrainage differences in male VO(2max), but predation did not affect VO(2max) in either sex. Maximum burst speed was correlated with size; absolute burst speed was higher in females, but size-adjusted speed was greater in males. For both sexes, burst speed conformed to allometric predictions. There were differences in burst speed between drainages in females, but predation regime did not affect burst speed in either sex. We did not find a significant correlation between burst speed and VO(2max), suggesting no trade-off between these traits. These results indicate that predation-mediated evolution of guppy life history does not produce concomitant evolution in aerobic capacity and maximum burst speed. However, other aspects of swimming performance (response latencies or acceleration) might show adaptive divergence in contrasting predation regimes.
Achievable capacity design for irregular and clustered high performance mesh networks
Olwal, TO
2012-11-01
... and locations of terminal users [10]. Moreover, typical rural wireless networks can be described by (i) long single-hop links, (ii) limited and unreliable energy sources, and (iii) a clustered distribution of Internet users [11]. The main problem... constitutes the need to increase the capacity of community-owned existing wireless broadband networks so that multimedia services can be delivered to remote and rural areas without losing connectivity [2].
Magic number behavior for heat capacities of medium sized classical Lennard-Jones clusters
Frantz, D D
2001-01-01
Monte Carlo methods were used to calculate heat capacities as functions of temperature for classical atomic clusters of aggregate sizes $25 \\leq N \\leq 60$ that were bound by pairwise Lennard-Jones potentials. The parallel tempering method was used to overcome convergence difficulties due to quasiergodicity in the solid-liquid phase-change regions. All of the clusters studied had pronounced peaks in their heat capacity curves, most of which corresponded to their solid-liquid phase-change regions. The heat capacity peak height and location exhibited two general trends as functions of cluster size: for $N = 25$ to 36, the peak temperature slowly increased, while the peak height slowly decreased, disappearing by $N = 37$; for $N = 30$, a very small secondary peak at very low temperature emerged and quickly increased in size and temperature as $N$ increased, becoming the dominant peak by $N = 36$. Superimposed on these general trends were smaller fluctuations in the peak heights that corresponded to ``magic numbe...
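Heat capacities of this kind are typically obtained from potential-energy fluctuations, C_V = (<E^2> - <E>^2) / (k_B T^2). The sketch below estimates the configurational part for a 13-atom Lennard-Jones cluster with plain Metropolis sampling in reduced units; unlike the parallel tempering used in the paper, a single walker converges poorly near the phase-change region, so treat it only as an illustration of the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def lj_energy(x):
    """Total Lennard-Jones potential energy (reduced units)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    r = d[np.triu_indices(len(x), k=1)]
    return np.sum(4.0 * (r**-12 - r**-6))

def heat_capacity(N=13, T=0.3, steps=60000, step=0.08, r_conf=2.5):
    """Configurational C_V/k_B = (<E^2>-<E>^2)/T^2 of an LJ_N cluster from
    single-walker Metropolis sampling inside a confining sphere."""
    x = np.indices((3, 3, 3)).reshape(3, -1).T[:N] * 1.1   # lattice start
    x = x - x.mean(axis=0)
    E, energies = lj_energy(x), []
    for i in range(steps):
        j = rng.integers(N)
        trial = x.copy()
        trial[j] += rng.normal(scale=step, size=3)
        if np.linalg.norm(trial[j]) > r_conf:              # keep cluster bound
            continue
        E_new = lj_energy(trial)
        if rng.random() < np.exp(min(0.0, -(E_new - E) / T)):
            x, E = trial, E_new
        if i >= steps // 2:                                # crude burn-in
            energies.append(E)
    return np.var(energies) / T**2

print("C_V/k_B at T* = 0.3: %.1f" % heat_capacity())
```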
Matthew D. Beekley
2006-07-01
Sumo wrestling is unique in combat sport, and in all of sport. We examined the maximum aerobic capacity and body composition of sumo wrestlers and compared them to untrained controls. We also compared "aerobic muscle quality", meaning VO2max normalized to predicted skeletal muscle mass (SMM), i.e. VO2max/SMM, between sumo wrestlers and controls and among previously published data for male athletes from combat, aerobic, and power sports. Sumo wrestlers, compared to untrained controls, had greater (p < 0.05) body mass (mean ± SD: 117.0 ± 4.9 vs. 56.1 ± 9.8 kg), percent fat (24.0 ± 1.4 vs. 13.3 ± 4.5), fat-free mass (88.9 ± 4.2 vs. 48.4 ± 6.8 kg), predicted SMM (48.2 ± 2.9 vs. 20.6 ± 4.7 kg) and absolute VO2max (3.6 ± 1.3 vs. 2.5 ± 0.7 L·min-1). Mean VO2max/SMM (ml·kg SMM-1·min-1) was significantly different (p < 0.05) among aerobic athletes (164.8 ± 18.3), combat athletes and untrained controls (not different from each other; 131.4 ± 9.3 and 128.6 ± 13.6, respectively), power athletes (96.5 ± 5.3), and sumo wrestlers (71.4 ± 5.3). There was a strong negative correlation (r = -0.75) between percent body fat and VO2max/SMM (p < 0.05). We conclude that sumo wrestlers have some of the largest percent body fat and fat-free mass and the lowest "aerobic muscle quality" (VO2max/SMM), both in combat sport and compared to aerobic and power sport athletes. Additionally, analysis of the relationship between SMM and absolute VO2max across sports suggests a "ceiling" at which increases in SMM do not result in additional increases in absolute VO2max.
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2013-01-01
We analyze the relationship between maximum cluster mass, and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H_2), neutral gas (Sigma_HI) and star formation rate (Sigma_SFR) in the grand design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. We find for clusters older than 25 Myr that M_3rd, the median of the 5 most massive clusters, is proportional to Sigma_HI^0.4. There is no correlation with Sigma_gas, Sigma_H2, or Sigma_SFR. For clusters younger than 10 Myr, M_3rd is proportional to Sigma_HI^0.6, M_3rd is proportional to Sigma_gas^0.5; there is no correlation with either Sigma_H_2 or Sigma_SFR. The results could hardly be more different than those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but M_3rd is proportional to Sigma_g...
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2012-01-01
We analyze the relationship between maximum cluster mass, M_max, and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2) and star formation rate (Sigma_SFR) in the flocculent galaxy M33, using published gas data and a catalog of more than 600 young star clusters in its disk. By comparing the radial distributions of gas and most massive cluster masses, we find that M_max is proportional to Sigma_gas^4.7, M_max is proportional to Sigma_H2^1.3, and M_max is proportional to Sigma_SFR^1.0. We rule out that these correlations result from the size of the sample; hence, the change of the maximum cluster mass must be due to physical causes.
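Exponents such as the 4.7 above are power-law fits; in the simplest form they come from ordinary least squares in log-log space. A minimal sketch with synthetic data (not the M33 measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data with a known exponent (4.7) plus lognormal scatter,
# standing in for (Sigma_gas, M_max) pairs.
sigma = np.logspace(0.5, 1.2, 20)                     # Msun/pc^2
m_max = 10.0 * sigma**4.7 * rng.lognormal(0.0, 0.3, 20)

# Ordinary least squares in log-log space: the slope is the exponent.
a, log_c = np.polyfit(np.log10(sigma), np.log10(m_max), 1)
print("fitted exponent a = %.1f (true value 4.7)" % a)
```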
Oh, Seungkyung
2012-01-01
We perform the largest currently available set of direct N-body calculations of young star cluster models to study the dynamical influence, especially through ejections of the most massive star in the cluster, on the relation between the maximum stellar mass and the star cluster mass. We vary several initial parameters, such as the initial half-mass radius of the cluster, the initial binary fraction, and the degree of initial mass segregation. Two different pairing methods are used to construct massive binaries for more realistic initial conditions of massive binaries. We find that for lower-mass clusters (<= 1000 Msun), no most-massive star escapes the cluster within 3 Myr, regardless of the initial conditions, if the clusters have initial half-mass radii r_0.5 >= 0.8 pc. However, a few of the initially smaller-sized clusters (r_0.5 = 0.3 pc), which have a higher density, eject their most massive star within 3 Myr. If clusters form with a compact size and their massive stars are born in a binary system wit...
Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.
2016-09-01
Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of a system's upper limit of its operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.
Research on Capacity of Wind Farm Maximum Power Integration
王湘明; 高杨; 刘丽钧
2012-01-01
In recent years, with the increasing development and application of wind power technology, the proportion of wind power in the power system has grown, and consequently the connection of wind power has a significant impact on the power system. This paper studies variable-speed constant-frequency doubly fed wind turbines, connecting wind farms composed of three different turbine types to the IEEE-14 system. A method combining steady-state and transient analysis is used to determine the maximum capacity of wind power the system can accept while remaining stable. Simulation results show that the number of turbines that can be connected to the system is related to their rated power, which determines the maximum capacity of the wind farm. The method determines the largest wind farm capacity that can be connected to the power system while guaranteeing the stability of both the wind farm and the system.
Di Cagno, Massimiliano; Styskala, Jakub; Hlaváč, Jan
2011-01-01
Four new 3-hydroxy-quinolinone derivatives with promising anticancer activity could be solubilized using liposomes as a vehicle to an extent that allows their in vitro and in vivo testing without the use of toxic solvent(s). A screening method to identify the maximum incorporation capacity of hydrophobic drugs within liposomes was successfully applied. The compounds and lipid(s) were dissolved in methanol, and the solvent was removed by rotary evaporation. The film was resuspended in phosphate buffer (pH 7.4), and the dispersion was sonicated to reduce vesicle size. Ultracentrifugation was used
Kuijer, P P F M; van Oostrom, S H; Duijzer, K; van Dieën, J H
2012-01-01
It is unclear whether the maximum acceptable weight of lift (MAWL), a common psychophysical measure, reflects joint kinetics when different lifting techniques are employed. In a within-participants study (n = 12), participants performed three lifting techniques--free style, stoop and squat lifting from knee to waist level--using the same dynamic functional capacity evaluation lifting test to assess MAWL and to calculate low back and knee kinetics. We assessed which knee and back kinetic parameters increased with the load mass lifted, and whether the magnitudes of the kinetic parameters were consistent across techniques when lifting the MAWL. MAWL differed significantly between techniques (p = 0.03). The peak lumbosacral extension moment met both criteria: it had the highest association with the load masses lifted (r > 0.9) and was most consistent between the three techniques when lifting the MAWL (ICC = 0.87). In conclusion, the MAWL reflects the lumbosacral extension moment across free style, stoop and squat lifting in healthy young males, but the relation between load mass lifted and lumbosacral extension moment differs between techniques. This suggests that standardisation of the lifting technique used in MAWL tests would be indicated if the aim is to assess the capacity of the low back.
Gonzalez-Lopezlira, Rosa A. [On sabbatical leave from the Centro de Radioastronomia y Astrofisica, UNAM, Campus Morelia, Michoacan, C.P. 58089, Mexico. (Mexico); Pflamm-Altenburg, Jan; Kroupa, Pavel, E-mail: r.gonzalez@crya.unam.mx [Argelander Institut fuer Astronomie, Universitaet Bonn, Auf dem Huegel 71, D-53121 Bonn (Germany)
2013-06-20
We analyze the relationship between maximum cluster mass and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2), neutral gas (Sigma_HI), and star formation rate (Sigma_SFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. By comparing the two-dimensional distribution of cluster masses and gas surface densities, we find for clusters older than 25 Myr that M_3rd ∝ Sigma_HI^(0.4±0.2), where M_3rd is the median of the five most massive clusters. There is no correlation with Sigma_gas, Sigma_H2, or Sigma_SFR. For clusters younger than 10 Myr, M_3rd ∝ Sigma_HI^(0.6±0.1) and M_3rd ∝ Sigma_gas^(0.5±0.2); there is no correlation with either Sigma_H2 or Sigma_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but we have determined M_3rd ∝ Sigma_gas^(3.8±0.3), M_3rd ∝ Sigma_H2^(1.2±0.1), and M_3rd ∝ Sigma_SFR^(0.9±0.1). For the older sample in M51, the lack of tight correlations is probably due to the combination of strong azimuthal variations in the surface densities of gas and star formation rate, and the cluster ages. These two facts mean that neither the azimuthal average of the surface densities at a given radius nor the surface densities at the present-day location of a stellar cluster represent the true surface densities at the place and time of cluster formation. In the case of the younger sample, even if the clusters have not yet
Shimada, Takae; Kawasaki, Norihiro; Ueda, Yuzuru; Sugihara, Hiroyuki; Kurokawa, Kosuke
This paper aims to clarify the battery capacity required by a residential area with densely grid-connected photovoltaic (PV) systems. A planning method is proposed for the next day's grid-connection power exchange with the external electric power system, using demand power forecasting and insolation forecasting for PV power prediction, and an operation method of the electricity storage device is defined to control the grid-connection power as planned. A residential area consisting of 389 houses consuming 2390 MWh/year of electricity with 2390 kW of PV systems is simulated based on measured data and actual forecasts. The simulation results show that 8.3 MWh of battery capacity is required under the conditions of half-hour planning and a planning error ratio and PV output limiting loss ratio of 1% or less. The results also show that existing forecasting technologies reduce the required battery capacity to 49% and increase the allowable installed PV amount to 210%.
Pan, Shu-Yuan; Chiang, Pen-Chi; Chen, Yi-Hung; Chen, Chun-Da; Lin, Hsun-Yu; Chang, E-E
2013-01-01
Accelerated carbonation of basic oxygen furnace slag (BOFS) coupled with cold-rolling wastewater (CRW) was performed in a rotating packed bed (RPB) as a promising process for both CO2 fixation and wastewater treatment. The maximum achievable capture capacity (MACC) via leaching and carbonation processes for BOFS in an RPB was systematically determined throughout this study. The leaching behavior of various metal ions from the BOFS into the CRW was investigated by a kinetic model. In addition, quantitative X-ray diffraction (QXRD) using the Rietveld method was carried out to determine the process chemistry of carbonation of BOFS with CRW in an RPB. According to the QXRD results, the major mineral phases reacting with CO2 in BOFS were Ca(OH)2, Ca2(HSiO4)(OH), CaSiO3, and Ca2Fe1.04Al0.986O5. Meanwhile, the carbonation product was identified as calcite according to SEM and XEDS observations and elemental mappings. Furthermore, the MACC of the lab-scale RPB process was determined by balancing the carbonation conversion and energy consumption. In that case, the overall energy consumption, including grinding, pumping, stirring, and rotating processes, was estimated to be 707 kWh/t-CO2. It was thus concluded that CO2 capture by accelerated carbonation of BOFS could be effectively and efficiently performed by co-utilizing it with CRW in an RPB.
Borner, Arnaud; Li, Zheng; Levin, Deborah A
2013-02-14
Supersonic expansions to vacuum produce clusters of sufficiently small size that properties such as heat capacities and latent heat of evaporation cannot be described by bulk vapor thermodynamic values. In this work the Monte Carlo Canonical-Ensemble (MCCE) method was used to provide potential energies and constant-volume heat capacities for small water clusters. The cluster structures obtained using the well-known simple point charge model were found to agree well with earlier simulations using more rigorous potentials. The MCCE results were used as the starting point for molecular dynamics simulations of the evaporation rate as a function of cluster temperature and size, which were found to agree with unimolecular dissociation theory and classical nucleation theory. The heat capacities and latent heat obtained from the MCCE simulations were used in direct simulation Monte Carlo modeling of two experiments that measured Rayleigh scattering and terminal dimer mole fraction in supersonic water-jet expansions. Water-cluster temperature and size were found to be influenced by the use of kinetic rather than thermodynamic heat-capacity and latent-heat values, as well as by the nucleation model.
Segmentation Based on Clustering and Maximum Entropy Method
陈秋红; 沈云琴
2012-01-01
This paper studies the image segmentation optimization problem. Owing to computational complexity and other factors, many image segmentation algorithms yield segmentations of low resolution and low clarity, and when images contain a large amount of information, segmentation is very time-consuming. In order to segment images effectively, a method combining spatial-pattern clustering with the maximum entropy principle is proposed. First, the maximum entropy algorithm is used to segment the image, and feature quantities are defined for each entropy region. Based on these features, the Euclidean distance and spatial distance between similar regions are calculated to determine the distances to the cluster-center pixels. Then, the segmented image regions are merged using a spatial-pattern clustering scheme, and the image is binarized. Simulation results show that, compared with traditional image segmentation, the method improves segmentation efficiency and produces clear region edges, demonstrating the feasibility and effectiveness of the algorithm.
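The entropy step of such a pipeline is essentially Kapur-style maximum-entropy thresholding: pick the gray level that maximizes the summed entropies of the background and foreground histograms. A self-contained sketch (the spatial clustering and region-merging stages are omitted):

```python
import numpy as np

def max_entropy_threshold(img):
    """Kapur maximum-entropy threshold for an 8-bit image: choose t that
    maximises the summed entropies of the background (< t) and
    foreground (>= t) gray-level distributions."""
    p = np.bincount(img.ravel(), minlength=256).astype(float)
    p /= p.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0.0 or p1 == 0.0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = sum(-np.sum(q[q > 0] * np.log(q[q > 0])) for q in (q0, q1))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Synthetic bimodal image: dark background with a brighter square.
rng = np.random.default_rng(2)
img = np.full((64, 64), 60.0)
img[20:40, 20:40] = 180.0
img = np.clip(img + rng.normal(0.0, 10.0, img.shape), 0, 255).astype(np.uint8)
print("entropy-optimal threshold:", max_entropy_threshold(img))
```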
Gobet, F; Carré, M; Farizon, B; Farizon, M; Gaillard, M J; Maerk, T D; Scheier, P
2002-01-01
By (i) selecting specific decay reactions in high-energy collisions (60 keV/amu) of hydrogen cluster ions with a helium target (utilizing event-by-event data from a recently developed multi-coincidence experiment) and by (ii) deriving corresponding temperatures for these microcanonical cluster ensembles (analyzing the respective fragment distributions), we are able to construct caloric curves for H3+(H2)m cluster ions (6 <= m <= 14). All individual curves and the mean of these curves show a backbending in the plateau region, thus constituting direct evidence for a negative microcanonical heat capacity in the liquid-to-gas-like transition of these finite systems.
NISHIKAWA, Kazuo; Fujimura, Takashi; Ota, Yasuhiro; Abe, Takuya; ElRamlawy, Kareem Gamal; Nakano, Miyako; Takado, Tomoaki; Uenishi, Akira; Kawazoe, Hidechika; Sekoguchi, Yoshinori; Tanaka, Akihiko; Ono, Kazuhisa; Kawamoto, Seiji
2016-01-01
Background: Environmental control to reduce the amount of allergens in living spaces is thought to be important to avoid sensitization to airborne allergens. However, the efficacy of environmental control for inactivating airborne allergens has not been fully investigated. We have previously reported that positively and negatively charged plasma cluster ions (PC-ions) reduce the IgE-binding capacity of crude allergens from Japanese cedar pollen, an important seasonal airborne allergen. Cat (Felis do...
Study on Influence Factors of Domestic Coal Ship Maximum Cargo Capacity
王威
2015-01-01
The maximum cargo capacity of domestic coal ships varies considerably under the influence of navigation area, season, fuel and water reserves, ship constant, ballast water storage and other factors. This paper gives a calculation method for the maximum cargo capacity of domestic coal ships and analyzes the factors affecting it and how they are determined, providing guidance for practical work.
Development and Maximum Accommodating Capacity of Wind Power in Beijing Power Grid
余潇潇; 张璞; 刘兆燕; 左向红; 张凯; 田子婵
2015-01-01
Based on the distribution of wind energy resources and the present situation of wind power integration in Beijing, a forecast of wind power development in the Beijing power grid is provided, focusing on the output characteristics of planned wind power projects and the development of wind power generation in Beijing during the 13th national five-year plan. A method for calculating the maximum accommodating capacity of wind power in the Beijing power grid is proposed, whose boundary conditions include the load characteristics of the grid, the peak-shaving capability of conventional power plants, the processing characteristics of new energy sources, and the power exchange limit with the external grid. The method is used to calculate the maximum penetration ratio of wind power in the Beijing power grid at the end of the 12th and 13th national five-year plans. Finally, related technical measures to promote the development of wind power in the Beijing power grid are suggested.
Yuan, Jing; Li, Guo-xue; Zhang, Hong-yu; Luo, Yi-ming
2013-09-01
It is necessary to optimize MSW logistics for the new Xicheng (combining the former Xicheng and Xuanwu districts) and the new Dongcheng (combining the former Dongcheng and Chongwen districts) districts of Beijing. Based on an analysis of the current MSW logistics system, the transfer stations' processing capacity and the terminal treatment facilities of the four former districts and other districts, an MSW logistics system considering transregional treatment was built using GIS methods. This article analyzes the MSW material balance of the current and new logistics systems. Results show that the optimization scheme could reduce the MSW collection distance of the new Xicheng and Dongcheng districts by 9.3 x 10(5) km per year, a 10% reduction compared with the current logistics. The new logistics solution, considering transregional treatment, can reduce untreated MSW sent to landfill by about 28.3%. If the construction of three incineration plants is finished under the new logistics, the system's ratio of incineration : biochemical treatment : landfill can reach 3.8 : 4.5 : 1.7, compared with 1 : 4.8 : 4.2 for the current MSW logistics, approaching the target ratio of approximately 4 : 3 : 3 for 2015. The results are beneficial for increasing the MSW utilization and reduction rates of the new Dongcheng and Xicheng districts and nearby districts.
De Kauwe, Martin G; Lin, Yan-Shih; Wright, Ian J; Medlyn, Belinda E; Crous, Kristine Y; Ellsworth, David S; Maire, Vincent; Prentice, I Colin; Atkin, Owen K; Rogers, Alistair; Niinemets, Ülo; Serbin, Shawn P; Meir, Patrick; Uddling, Johan; Togashi, Henrique F; Tarvainen, Lasse; Weerasinghe, Lasantha K; Evans, Bradley J; Ishida, F Yoko; Domingues, Tomas F
2016-05-01
Simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax ). Estimating this parameter using A-Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci ) is laborious, which limits availability of Vcmax data. However, many multispecies field datasets include net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat ) measurements, from which Vcmax can be extracted using a 'one-point method'. We used a global dataset of A-Ci curves (564 species from 46 field sites, covering a range of plant functional types) to test the validity of an alternative approach to estimate Vcmax from Asat via this 'one-point method'. If leaf respiration during the day (Rday ) is known exactly, Vcmax can be estimated with an r(2) value of 0.98 and a root-mean-squared error (RMSE) of 8.19 μmol m(-2) s(-1) . However, Rday typically must be estimated. Estimating Rday as 1.5% of Vcmax, we found that Vcmax could be estimated with an r(2) of 0.95 and an RMSE of 17.1 μmol m(-2) s(-1) . The one-point method provides a robust means to expand current databases of field-measured Vcmax , giving new potential to improve vegetation models and quantify the environmental drivers of Vcmax variation.
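The one-point method itself is a one-line rearrangement of the Rubisco-limited Farquhar expression: with A_sat = Vcmax * (Ci - Γ*)/(Ci + Km) - Rday and Rday = 0.015 * Vcmax, Vcmax follows directly from a single (A_sat, Ci) pair. A sketch with commonly used 25 °C constants (assumed here, not necessarily the exact values of the paper):

```python
def vcmax_one_point(a_sat, ci, gamma_star=42.75, km=710.0):
    """One-point Vcmax (umol m-2 s-1) from a single light-saturated
    measurement A_sat at intercellular CO2 ci (umol mol-1), assuming
    Rubisco-limited photosynthesis and Rday = 0.015 * Vcmax:
        A_sat = Vcmax * ((ci - gamma*) / (ci + Km) - 0.015)
    gamma* (CO2 compensation point) and Km (effective Michaelis-Menten
    constant) are common 25 C defaults, assumed here."""
    return a_sat / ((ci - gamma_star) / (ci + km) - 0.015)

print("Vcmax = %.1f umol m-2 s-1" % vcmax_one_point(a_sat=20.0, ci=280.0))
```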
Brdareski Zorica
2012-01-01
Background/Aim. Regular physical activity and exercise improve quality of life, possibly reduce the risk of disease relapse and prolong survival in breast cancer survivors. The aim of this study was to evaluate the impact of 3 weeks of moderate-intensity aerobic training on aerobic capacity (VO2max) in breast cancer survivors. Methods. A prospective, randomized clinical study included 18 female breast cancer survivors in stage I-IIIA, whose primary treatment had been completed at least 3 months before study inclusion. In all patients VO2max was estimated using Astrand's protocol on a bicycle ergometer (before and after 3 weeks of training), while subjective exertion during training was rated on the Category-Ratio RPE Scale. Each workout lasted 21 minutes: 3 minutes each for warm-up and cool-down and 15 minutes of full training, 2 times a week. The workload in group E1 was predefined at 45% to 65% of individual VO2max, and in group E2 it was based on subjective evaluation of exertion, at the level marked 4-6. Data on the subjective feeling of exertion were collected after each training session in both groups. Results. We recorded a statistically significant improvement in VO2max in both groups (E1: 11.86%; E2: 17.72%), with no significant differences between the groups. The workload level, determined by the percent of VO2max, differed between groups E1 and E2 (50.47 ± 7.02% vs 55.58 ± 9.58%), as did the subjective perception of exertion (in groups E1 and E2, 11.6% and 41.6% of training sessions, respectively, were graded 6). Conclusion. In our group of breast cancer survivors, 3 weeks of moderate-intensity aerobic training significantly improved VO2max.
Tønnessen, Espen; Shalfawi, Shaher A I; Haugen, Thomas; Enoksen, Eystein
2011-09-01
The purpose of this study was to examine the effect of 10 weeks' 40-m repeated sprint training program that does not involve strength training on sprinting speed and repeated sprint speed on young elite soccer players. Twenty young well-trained elite male soccer players of age (±SD) 16.4 (±0.9) years, body mass 67.2 (±9.1) kg, and stature 176.3 (±7.4) cm volunteered to participate in this study. All participants were tested on 40-m running speed, 10 × 40-m repeated sprint speed, 20-m acceleration speed, 20-m top speed, countermovement jump (CMJ), and aerobic endurance (beep test). Participants were divided into training group (TG) (n = 10) and control group (CG) (n = 10). The study was conducted in the precompetition phase of the training program for the participants and ended 13 weeks before the start of the season; the duration of the precompetition period was 26 weeks. The TG followed a Periodized repeated sprint training program once a week. The training program consisted of running 40 m with different intensities and duration from week to week. Within-group results indicate that TG had a statistically marked improvement in their performance from pre to posttest in 40-m maximum sprint (-0.06 seconds), 10 × 40-m repeated sprint speed (-0.12 seconds), 20- to 40-m top speed (-0.05 seconds), and CMJ (2.7 cm). The CG showed only a statistically notable improvement from pre to posttest in 10 × 40-m repeated sprint speed (-0.06 seconds). Between-group differences showed a statistically marked improvement for the TG over the CG in 10 × 40-m repeated sprint speed (-0.07 seconds) and 20- to 40-m top speed (-0.05 seconds), but the effect of the improvement was moderate. The results further indicate that a weekly training with repeated sprint gave a moderate but not statistically marked improvement in 40-m sprinting, CMJ, and beep test. The results of this study indicate that the repeated sprint program had a positive effect on several of the parameters tested
Research on Maximum Access Capacity of Wind Farms Based on Dynamic Constraints
张俊; 晁勤; 段晓田; 袁铁江
2011-01-01
A simulation model of a power system with a wind farm in a region of Xinjiang is established on the DIgSILENT/Power Factory simulation platform. The wind power penetration limit is calculated and simulated with voltage and frequency considered as dynamic constraints, and the maximum access capacity of the wind farm in this regional grid is determined. The results show that the maximum access capacity determined by combining frequency-constraint calculation with time-domain simulation can ensure stable operation of the wind power system. Moreover, optimizing the factors that affect stable operation and the different constraint conditions is important for determining the maximum access capacity of a wind farm and for its design, operation and planning. This work is supported by the National Natural Science Foundation of China (No. 50667002).
Alan D Dangour
2011-04-01
BACKGROUND: Ageing is associated with increased risk of poor health and functional decline. Uncertainties about the health-related benefits of nutrition and physical activity for older people have precluded their widespread implementation. We investigated the effectiveness and cost-effectiveness of a national nutritional supplementation program and/or a physical activity intervention among older people in Chile. METHODS AND FINDINGS: We conducted a cluster randomized factorial trial among low to middle socioeconomic status adults aged 65-67.9 years living in Santiago, Chile. We randomized 28 clusters (health centers) into the study and recruited 2,799 individuals in 2005 (~100 per cluster). The interventions were a daily micronutrient-rich nutritional supplement, or two 1-hour physical activity classes per week, or both interventions, or neither, for 24 months. The primary outcomes, assessed blind to allocation, were incidence of pneumonia over 24 months and physical function assessed by walking capacity 24 months after enrollment. Adherence was good for the nutritional supplement (~75%) and moderate for the physical activity intervention (~43%). Over 24 months the incidence rate of pneumonia did not differ between intervention and control clusters (32.5 versus 32.6 per 1,000 person-years, respectively; risk ratio = 1.00; 95% confidence interval 0.61-1.63; p = 0.99). In intention-to-treat analysis, after 24 months there was a significant difference in walking capacity between the intervention and control clusters (mean difference 33.8 meters; 95% confidence interval 13.9-53.8; p = 0.001). The overall cost of the physical activity intervention over 24 months was US$164/participant, equivalent to US$4.84 per extra meter walked. The number of falls and fractures was balanced across physical activity intervention arms and no serious adverse events were reported for either intervention. CONCLUSIONS: Chile's nutritional supplementation program for
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Sims Margaret
2011-11-01
Background: Childhood mental health problems are highly prevalent, experienced by one in five children living in socioeconomically disadvantaged families. Although childcare settings, including family day care, are ideal for promoting children's social and emotional wellbeing at a population level in a sustainable way, family day care educators receive limited training in promoting children's mental health. This study is an exploratory wait-list control cluster randomised controlled trial to test the appropriateness, acceptability, cost, and effectiveness of "Thrive," an intervention program to build the capacity of family day care educators to promote children's social and emotional wellbeing. Thrive aims to increase educators' knowledge, confidence and skills in promoting children's social and emotional wellbeing. Methods/Design: This study involves one family day care organisation based in a low socioeconomic area of Melbourne. All family day care educators (the term used for registered carers who provide care for children, for financial reimbursement, in the carer's own home) are eligible to participate in the study. The clusters for randomisation will be the fieldworkers (n = 5) who each supervise 10-15 educators. The intervention group (fieldworkers and educators) will participate in a variety of intervention activities over 12 months, including workshops, activity exchanges with other educators, and focused discussion about children's social and emotional wellbeing during fieldworker visits. The control group will continue with their normal work practice. The intervention will be delivered to the intervention group and then to the control group after a time delay of 15 months post intervention commencement. A baseline survey will be conducted with all consenting educators and fieldworkers (n = ~70) assessing outcomes at the cluster and individual level. The survey will also be administered at one month, six months and 12 months post
李霞; 赵冬雪
2015-01-01
A network capacity analysis model for cluster-based underwater acoustic sensor networks is presented, based on the propagation characteristics of the underwater acoustic channel and the structural features of clustered networks. On the basis of the signal-to-interference-plus-noise ratio (SINR) of the received signal, performance metrics such as the probability of successful transmission per node and the network throughput density are defined, and the relations among these parameters are described. Computer simulations analyze the capacity of the interference-limited network under two channel allocation methods: random channel selection and fixed channel assignment by node ID. The results show that, for a given network monitoring area, there is an optimal node density that maximizes network capacity. The influence of the network's operating frequency band, node transmit power and cluster coverage area on capacity is also studied. The simulation results verify the validity of the theoretical analysis and can provide a reference for the application design of cluster-based underwater acoustic sensor networks.
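The existence of an optimal node density is easy to see in a stripped-down interference-limited model: if the per-link success probability decays roughly exponentially with transmitter density, the throughput density lam * p(lam) has an interior maximum. The sketch below uses this generic stochastic-geometry form with made-up constants, not the paper's underwater acoustic channel model.

```python
import numpy as np

# Generic interference-limited model: success probability decays as
# exp(-c * lam) with transmitter density lam, so the throughput density
# D(lam) = lam * exp(-c * lam) peaks at lam = 1/c. The coefficient c is
# a made-up stand-in for the SINR/channel details.
c = 0.02                                  # km^2 per interfering node
lam = np.linspace(1.0, 200.0, 400)        # nodes per km^2
D = lam * np.exp(-c * lam)
print("optimal density ~ %.0f nodes/km^2 (analytic optimum 1/c = %.0f)"
      % (lam[np.argmax(D)], 1.0 / c))
```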
Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C. S.
1985-01-01
The cluster correlation function xi sub c(r) is compared with the particle correlation function, xi(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, xi sub c and xi are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of xi sub c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), xi sub c is steeper than xi, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of xi sub c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales.
黎冰; 高玉峰; 沙成明; 童小东
2012-01-01
To accurately determine the maximum pull-out capacity of suction caisson foundations in sand, the limit equilibrium method is applied. Based on the mechanical characteristics of a suction caisson foundation under horizontal translation, a three-dimensional limit equilibrium method for the maximum pull-out capacity of suction caisson foundations in sand is proposed. The method considers the development of earth pressure and shear resistance with displacement, and the variation of earth pressure and side shear resistance over the caisson cross-section. The earth pressure acting on the caisson is assumed to obey the Winkler model without exceeding the limiting earth pressure, and the shear resistance between caisson and soil is assumed to be linearly proportional to their relative displacement before reaching its ultimate value. Fifteen model tests of suction caisson foundations under horizontal loading in sand were conducted to investigate the pull-out behavior, and load-displacement curves were obtained. The results calculated by the proposed method agree well with the experimental results, indicating that the method is accurate and effective.
史先进
2011-01-01
Based on the principle of network traffic statistics for a website cluster or multiple websites, a traffic statistics and analysis system was designed. Using a three-layer design pattern (entity, data access, business logic), a layered architecture of the system is presented that realizes traffic statistics for a website cluster or multiple websites and distinctly improves the system's load capacity, efficiency and stability. The system can provide a reference for users and administrative authorities to revise or optimize network marketing strategies.
Mari Lucia Campos
2007-12-01
In view of the toxicity of As to humans and animals and the possibly large number of contaminated areas, it is essential to know the total As content of soils considered uncontaminated and the As sorption processes in soils of variable charge. The objective of this work was to determine the total content and maximum adsorption capacity of As (CMADS As) in Oxisols. The total content was determined by the USEPA 3051A method. The CMADS As was determined from Langmuir isotherms based on adsorption values obtained at six As solution concentrations (0, 0.09, 0.19, 0.38, 0.76 and 1.15 mmol L-1; final soil:solution ratio 1:100), at pH 5.5 and ionic strength 15 mmol L-1. In the 17 Oxisols, the average total As content was 5.92 mg kg-1 and the mean CMADS As was 2,013 mg kg-1. Clay content and Fe and Al oxide contents positively influenced the CMADS As.
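Maximum adsorption capacities of this kind are the q_max parameter of a Langmuir isotherm fit, q = q_max * b * c / (1 + b * c). A minimal sketch with synthetic sorption data (illustrative numbers, not the Oxisol measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, b):
    """Langmuir isotherm: sorbed amount q at equilibrium concentration c;
    q_max is the maximum adsorption capacity, b the affinity constant."""
    return q_max * b * c / (1.0 + b * c)

# Synthetic sorption data: equilibrium As concentration vs amount sorbed.
c_eq = np.array([0.02, 0.05, 0.10, 0.20, 0.40, 0.80])          # mmol/L
q = np.array([400.0, 800.0, 1200.0, 1550.0, 1800.0, 1950.0])   # mg/kg

(q_max, b), _ = curve_fit(langmuir, c_eq, q, p0=[2000.0, 5.0])
print("fitted maximum adsorption capacity q_max = %.0f mg/kg" % q_max)
```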
Carla Aparecida Cielo
2012-06-01
PURPOSE: to determine and correlate the maximum phonation times (MPT) of vowels, vital capacity (VC) and laryngeal disorders (LD) in women with benign organic lesions resulting from vocal misuse or abuse (BOL). METHOD: retrospective, transverse, exploratory, non-experimental, quantitative study using a database of MPT [a, i, u], VC and LD measurements of women with BOL; chi-square and Fisher's exact tests were used to investigate the differences between the variables and their relationships, and a binomial test to check the significance of proportions or percentages in the descriptive analysis, with p < 0.05. RESULTS: the majority (22; 75.86%) showed significantly reduced MPT (p = 0.0053) and seven (24.14%) normal MPT. Normal VC was statistically significant (p = 0.0001) (26; 89.66%), but three women (10.34%) showed reduced VC. There was significant dominance of vocal nodules (p = 0.0016) (22; 75.86%), followed by Reinke's edema (6; 20.69%) and vocal polyp (1; 3.45%). Among the 22 women (75.86%) with reduced MPT, normal VC predominated (19; 86.36%), although without statistical significance (p = 0.558). All individuals with normal MPT showed normal VC (7; 100%). The majority with BOL showed normal VC, although not statistically significant (p = 0.199). There was a predominance of vocal nodules and reduced MPT (16; 72
杨自辉; 符卓; 雷定猷; 张红
2012-01-01
In recent years the logistics industry has developed rapidly, especially since 2009, when it was included in the "Ten Industry Revitalization Plans". This paper analyzes the storage capacity of a vertical logistics industry cluster, constructs several mathematical models of that storage capacity, and uses these models to derive a six-grid model of storage capacity versus social resource consumption. Based on an analysis of the six-grid model, it identifies the equilibrium state of storage capacity in the vertical logistics industry cluster, which is also the optimal state of node storage supply.
Spanning Tree Based Attribute Clustering
Zeng, Yifeng; Jorge, Cordero Hernandez
2009-01-01
inconsistent edges from a maximum spanning tree by starting appropriate initial modes, therefore generating stable clusters. It discovers sound clusters through simple graph operations and achieves significant computational savings. We compare the Star Discovery algorithm against earlier attribute clustering...
Fábio Broggi
2011-02-01
Full Text Available The Phosphorus Capacity Factor (FCP) is defined by the equilibrium ratio between the P quantity factor (Q) and the intensity factor (I) and represents a measure of the soil's capacity to maintain a given level of P in solution. The characteristics and content of the mineral constituents of the clay fraction are responsible for a greater or lesser FCP, interfering in soil-plant relations. On the other hand, soil pH has in some cases shown an effect on adsorption and, in others, only a small and inconsistent change in the Maximum P Adsorption Capacity (CMAP). The objectives of this work were to determine the FCP of mineralogically different soils in Pernambuco, to correlate physical and chemical soil characteristics with the FCP, and to evaluate the effect of pH on the CMAP. Subsurface samples of four mineralogically different soils were characterized chemically and physically and their FCP was determined. These samples were limed with CaCO3 and MgCO3 at a 4:1 ratio and incubated for 30 days, with the exception of the Vertisol. The CMAP was determined before and after liming. The experiment consisted of a 4 x 2 factorial (four soils, with and without liming) in a randomized block design with three replicates. The soil characteristics that best reflected the FCP were remaining P (P-rem) and the CMAP. Regardless of the mineralogical constituents of the clay fraction, soils with high aluminium contents showed an increase in CMAP with liming. The adsorption energy (EA) in the limed soils was, on average, significantly lower, regardless of the soil.
Differences in Innovation Capacity and Performance Analysis of SMEs Clusters
范如国; 蔡海霞; 李星
2012-01-01
Integrating new growth theory and evolutionary economics, and taking an open-innovation perspective, this paper uses the 54 state-level high-tech industrial development zones as a sample to analyze the mechanism between the innovation capacity of SME clusters and cluster performance. The empirical results show that the innovation input of enterprises within a cluster has the strongest impact on the cluster's international market performance. Among the inputs, the investment in science and technology personnel affects only international market performance, whereas the expenditure on science and technology activities has a significant positive impact on international market, domestic market and production performance. Open innovation sources such as university R&D expenditure, research institutions' R&D expenditure and the value of provincial technology market transaction contracts all have positive spillover effects on cluster innovation capacity and thereby affect cluster performance.
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
罗铭; 刚傲
2016-01-01
Water shortage is one of the key factors influencing and constraining the sustainable development of the Beijing-Tianjin-Hebei (BTH) region during its social transformation and modernization. This study quantitatively assesses, with a population carrying capacity method, the population size and scale of economic development that the water resources of the BTH region can support. Model validation results show that, in the short term, the water resources of the BTH region can carry the population comfortably and can ease the contradiction between water supply and demand; from about 2025 onwards, however, the population carrying capacity of the region's water resources becomes insufficient and the BTH region again enters a period of water scarcity. After the implementation of the South-to-North Water Diversion project, the water resources carrying capacity of the BTH region increases to some extent, but under continued population growth the population that the water resources can carry remains close to its limit. Strategies for resolving water shortage in the BTH region should therefore not focus only on water saving and water diversion engineering, but should also account for population growth and avoid excessive population increase; otherwise, the water gained is likely to be consumed by excessive population growth and the water shortage problem will not really be solved.
张晓东; 王江波; 董慧峰; 蒋小亮; 陈晨
2012-01-01
Based on an analysis of the evaluation framework for power supply capacity in industrial cluster districts, this paper puts forward indices of power supply adequacy and power consumption correlation, and establishes an evaluation model of power supply capacity for industrial cluster districts based on a priority-of-attention matrix, combining the BCG (Boston) matrix with the present power supply and consumption situation of the districts. Taking the industrial cluster districts of Henan province as an example, the analysis provides a basis for optimizing the subsequent power grid planning schemes of Henan Electric Power Company for the cluster districts.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
孙俊
2009-01-01
In order to improve the general applicability and real-time performance of image threshold segmentation, a two-dimensional maximum between-cluster variance segmentation algorithm based on a two-level genetic algorithm was proposed. Building on the 2D maximum between-cluster variance algorithm, the influence of the neighborhood template size on the optimal threshold was studied; the gray value, neighborhood size and neighborhood mean of the image were encoded as genes, a genetic algorithm was used to narrow the optimal threshold down to a small range, and a second genetic algorithm run within that range searched for the global optimum. The algorithm was applied to target recognition experiments in a cucumber computer vision system. The experimental results showed that, in terms of the number of between-cluster variance evaluations, the two-level genetic-algorithm 2D method required only 0.18% of the computation of the conventional 2D maximum between-cluster variance algorithm and 46.87% of that of the 1D Otsu algorithm; the running time was also greatly reduced compared with both, and the segmentation quality was clearly improved. The algorithm thus provides a new real-time image segmentation method for target recognition and has value for wider application.
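For reference, the quantity that the genetic algorithm accelerates here is the between-cluster variance computed over the joint histogram of pixel gray level and neighborhood mean. The sketch below brute-forces a simplified two-class version of that criterion (class 0 is the low-gray/low-mean quadrant); it illustrates the criterion only and is not the paper's GA-accelerated method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def otsu_2d_threshold(img, win=3):
    """Brute-force search of a simplified 2D maximum between-cluster variance criterion.
    img: 2D uint8 array; returns (s, t) = gray-level / neighborhood-mean thresholds."""
    mean_img = uniform_filter(img.astype(float), size=win)
    g = img.astype(int)
    m = np.clip(np.rint(mean_img).astype(int), 0, 255)
    hist = np.zeros((256, 256))
    np.add.at(hist, (g.ravel(), m.ravel()), 1)
    p = hist / hist.sum()
    levels = np.arange(256)
    P = p.cumsum(0).cumsum(1)                          # class-0 probability mass
    Mg = (p * levels[:, None]).cumsum(0).cumsum(1)     # class-0 gray-level moment
    Mm = (p * levels[None, :]).cumsum(0).cumsum(1)     # class-0 neighborhood-mean moment
    Mg_tot, Mm_tot = Mg[-1, -1], Mm[-1, -1]
    best, s_best, t_best = -1.0, 0, 0
    for s in range(256):
        for t in range(256):
            w0 = P[s, t]
            w1 = 1.0 - w0
            if w0 < 1e-9 or w1 < 1e-9:
                continue
            dg = Mg[s, t] / w0 - (Mg_tot - Mg[s, t]) / w1   # difference of class gray means
            dm = Mm[s, t] / w0 - (Mm_tot - Mm[s, t]) / w1   # difference of class mean-image means
            variance = w0 * w1 * (dg * dg + dm * dm)        # trace of between-class scatter
            if variance > best:
                best, s_best, t_best = variance, s, t
    return s_best, t_best

# usage: s, t = otsu_2d_threshold(gray_image)
```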
Landex, Alex
2011-01-01
Stations are often limiting the capacity of railway networks. This is due to extra need of tracks when trains stand still, trains turning around, and conflicting train routes. Although stations are often the capacity bottlenecks, most capacity analysis methods focus on open line capacity. Therefore, this paper presents methods to analyze station capacity. Four methods to analyze station capacity are developed. The first method is an adapted UIC 406 capacity method that can be used to analyze switch zones and platform tracks at stations that are not too complex. The second method examines the need for platform tracks and the probability that arriving trains will not get a platform track immediately at arrival. The third method is a scalable method that analyzes the conflicts in the switch zone(s); in its simplest stage the method just analyzes the track layout, while the more advanced stages also take the probability of conflicts and the minimum headway times into account. The last method analyzes how optimal platform tracks are used by examining the arrival and departure pattern of the trains. The developed methods can either be used separately to analyze specific characteristics of the capacity of a station...
Multicast Capacity Scaling of Wireless Networks with Multicast Outage
Liu, Chun-Hung
2010-01-01
Multicast transmission has several distinctive traits as opposed to more commonly studied unicast networks. Specifically, these include (i) identical packets must be delivered successfully to several nodes, (ii) outage could simultaneously happen at different receivers, and (iii) the multicast rate is dominated by the receiver with the weakest link in order to minimize outage and retransmission. To capture these key traits, we utilize a Poisson cluster process consisting of a distinct Poisson point process (PPP) for the transmitters and receivers, and then define the multicast transmission capacity (MTC) as the maximum achievable multicast rate times the number of multicast clusters per unit volume, accounting for outages and retransmissions. Our main result shows that if $\tau$ transmission attempts are allowed in a multicast cluster, the MTC is $\Theta\left(\rho k^{x}\log(k)\right)$ where $\rho$ and $x$ are functions of $\tau$ depending on the network size and density, and $k$ is the average number of the inte...
Cluster as the Institute of Reindustrialization Territorial and Industrial Complexes
Shevchenko Inna, K.
2016-03-01
Full Text Available In the context of the reindustrialization of the economy, which is characteristic not only of developing but also of developed countries, one of the main policy directions for capacity building and for restoring industrial production growth rates is the maximum concentration of existing capacity. One of the basic mechanisms for concentrating scientific, technical and production potential is the cluster: a geographically concentrated group of related companies and organizations that interact with each other in order to reduce investment costs, to ease the search for highly specialized experts, and to gain access to new technologies, management methods, and established suppliers and buyers. The clusters currently operating on the territory of Russia, which are generally based on Soviet scientific-production associations, have a positive effect on the economy of the region, including its investment and innovation activity, which allows clusters to be considered institutions for implementing the strategy of reindustrialization.
Progressive Exponential Clustering-Based Steganography
Li Yue
2010-01-01
Full Text Available Cluster indexing-based steganography is an important branch of data-hiding techniques. Such schemes normally achieve good balance between high embedding capacity and low embedding distortion. However, most cluster indexing-based steganographic schemes utilise less efficient clustering algorithms for embedding data, which causes redundancy and leaves room for increasing the embedding capacity further. In this paper, a new clustering algorithm, called progressive exponential clustering (PEC), is applied to increase the embedding capacity by avoiding redundancy. Meanwhile, a cluster expansion algorithm is also developed in order to further increase the capacity without sacrificing imperceptibility.
Landex, Alex
2011-01-01
Stations are often limiting the capacity of railway networks. This is due to extra need of tracks when trains stand still, trains turning around, and conflicting train routes. Although stations are often the capacity bottlenecks, most capacity analysis methods focus on open line capacity. Therefore, this paper presents methods to analyze station capacity. Four methods to analyze station capacity are developed. The first method is an adapted UIC 406 capacity method that can be used to analyze switch zones and platform tracks at stations that are not too complex. The second method examines the need for platform tracks and the probability that arriving trains will not get a platform track immediately at arrival. The third method is a scalable method that analyzes the conflicts in the switch zone(s). In its simplest stage, the method just analyzes the track layout while the more advanced stages also take...
Marcia R Weaver
Full Text Available TRIAL DESIGN: Best practices for training mid-level practitioners (MLPs) to improve global health services are not well characterized. Two hypotheses were: 1) Integrated Management of Infectious Disease (IMID) training would improve clinical competence, as tested with a single-arm, pre-post design, and 2) on-site support (OSS) would yield additional improvements, as tested with a cluster-randomized trial. METHODS: Thirty-six Ugandan health facilities (randomized 1∶1 to parallel OSS and control arms) enrolled two MLPs each. All MLPs participated in IMID (3-week core course, two 1-week boost sessions, distance learning). After the 3-week course, OSS-arm trainees participated in monthly OSS. Twelve written case scenarios tested clinical competencies in HIV/AIDS, tuberculosis, malaria, and other infectious diseases. Each participant completed different randomly-assigned blocks of four scenarios before IMID (t0), after the 3-week course (t1), and after the second boost course (t2, 24 weeks after t1). Scoring guides were harmonized with IMID content and Ugandan national policy. Score analyses used a linear mixed-effects model. The primary outcome measure was longitudinal change in scenario scores. RESULTS: Scores were available for 856 scenarios. Mean correct scores at t0, t1, and t2 were 39.3%, 49.1%, and 49.6%, respectively. Mean score increases (95% CI, p-value) for t0-t1 (pre-post period) and t1-t2 (parallel-arm period) were 12.1 ((9.6, 14.6), p<0.001) and -0.6 ((-3.1, +1.9), p = 0.647) percent for the OSS arm and 7.5 ((5.0, 10.0), p<0.001) and 1.6 ((-1.0, +4.1), p = 0.225) for the control arm. The estimated mean difference in t1 to t2 score change, comparing arm A (participated in OSS) vs. arm B, was -2.2 ((-5.8, +1.4), p = 0.237). From t0-t2, mean scores increased for all 12 scenarios. CONCLUSIONS: Clinical competence increased significantly after a 3-week core course; improvement persisted for 24 weeks. No additional impact of OSS was observed. Data on clinical practice
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
33 CFR 183.53 - Horsepower capacity.
2010-07-01
Title 33 (Navigation and Navigable Waters), Boating Safety — Boats and Associated Equipment, Safe Powering, § 183.53 Horsepower capacity. The maximum horsepower capacity marked on a boat must not exceed the horsepower capacity determined by the...
Maria Regina Machado Perrotti
2000-02-01
Full Text Available Purpose: to compare the capacity of diagnosing oligohydramnios by ultrasound using different cutoff values of the maximum amniotic fluid pool depth, in comparison to the amniotic fluid index, among normal pregnant women from the 36th to the 42nd week of gestation. Methods: a descriptive study of diagnostic validity was performed on 875 normal pregnant women. During a routine obstetric ultrasound examination, the maximum amniotic fluid pool depth was measured for the diagnosis of oligohydramnios, using the amniotic fluid index as the gold standard. The data were analyzed by calculating the sensitivity and specificity of the maximum pool measurement at cutoff points of 10, 20 and 30 mm, compared with the normal amniotic fluid index values defined by the 2.5th and 10th percentiles for the respective gestational ages. Results: the maximum pool measurement shows low sensitivity for diagnosing oligohydramnios when the 10 and 20 mm cutoffs are adopted, and good sensitivity and specificity when 30 mm is adopted, compared with the amniotic fluid index at the 2.5th and 10th percentiles of the normal curve. The sensitivity and specificity of the maximum pool measurement are better when the 30 mm cutoff is adopted to diagnose oligohydramnios in comparison with the 2.5th percentile. Conclusions: the capacity to diagnose oligohydramnios by the maximum pool measurement is satisfactory only with the 30 mm cutoff.
Schroll, Henning; Andersen, Jan; Kjærgård, Bente
2012-01-01
A spatial planning act was introduced in Indonesia in 1992 and renewed in 2008. It emphasised the planning role of decentralised authorities. The spatial planning act covers both spatial and environmental issues. It defines the concept of carrying capacity and includes definitions of supportive carrying capacity (SCC) and assimilative carrying capacity (ACC). The act mandates that the latter two aspects must be taken into consideration in the local spatial plans. The present study aimed at developing a background for a national guideline for carrying capacity in Indonesian provinces and districts ... standard or governmental political objective exists. In most cases it was possible to select a set of indicators, including thresholds that are workable in a carrying capacity planning at the local administrative levels. Not all relevant sectors at the decentralized level were included. Indicators of SCC...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
K. Venkata Subbaiah
2010-01-01
Full Text Available Nodes in mobile ad hoc networks act as both routers and hosts, so the routing protocol is a primary issue that must be supported before any application can be deployed on such networks. Many protocols have recently been proposed for finding an efficient route between nodes, but most of them use conventional routing techniques. CBRP is a routing protocol with a hierarchical design: it divides the network area into several smaller areas called clusters. We propose a fuzzy-logic-based cluster head election using an energy concept for cluster head routing in MANETs. Selecting an appropriate cluster head can save power for the whole mobile ad hoc network. Generally, cluster head election in mobile ad hoc networks is based on the distance to the centroid of a cluster, with the closest node elected as the cluster head, or the node with the maximum battery capacity is picked as the cluster head. In this paper, we present a cluster head election scheme using a fuzzy logic system (FLS) for mobile ad hoc networks. Three descriptors are used: the distance of a node to the cluster centroid, its remaining battery capacity, and its degree of mobility. The linguistic knowledge of cluster head election based on these three descriptors was obtained from a group of network experts, and 27 FLS rules were set up from this knowledge. The output of the FLS gives a cluster head possibility, and the node with the highest possibility is elected as the cluster head. The performance of fuzzy cluster head selection is evaluated by simulation and compared with the LEACH protocol without the fuzzy cluster head election procedure; the results show that the proposed scheme is more efficient than the previous one.
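As a rough, hypothetical sketch of the idea, a toy Mamdani-style score with triangular memberships and only a handful of rules (not the paper's 27-rule FLS or its exact membership functions) could be written as:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return float(np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                       (c - x) / (c - b + 1e-12)), 0.0))

def memberships(x, lo, hi):
    """Low / medium / high memberships of x on the range [lo, hi]."""
    mid = (lo + hi) / 2.0
    return {"low": tri(x, lo - 1e-9, lo, mid),
            "med": tri(x, lo, mid, hi),
            "high": tri(x, mid, hi, hi + 1e-9)}

def cluster_head_possibility(dist, battery, mobility):
    """Toy score in [0, 1]: prefer nodes close to the cluster centroid,
    with high remaining battery and low mobility (all inputs normalized to [0, 1])."""
    d = memberships(dist, 0.0, 1.0)
    b = memberships(battery, 0.0, 1.0)
    m = memberships(mobility, 0.0, 1.0)
    # a few representative rules: strength = min of antecedents,
    # each mapped to a crisp consequent for weighted-average defuzzification
    rules = [
        (min(d["low"],  b["high"], m["low"]),  1.0),   # ideal cluster head
        (min(d["low"],  b["med"],  m["low"]),  0.8),
        (min(d["med"],  b["high"], m["med"]),  0.6),
        (min(d["high"], b["low"],  m["high"]), 0.1),   # poor candidate
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules) + 1e-12
    return num / den

# the node with the highest possibility would be elected cluster head
nodes = {"n1": (0.2, 0.9, 0.1), "n2": (0.7, 0.4, 0.6), "n3": (0.4, 0.8, 0.3)}
scores = {k: cluster_head_possibility(*v) for k, v in nodes.items()}
print(max(scores, key=scores.get), scores)
```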
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Schroll, Henning; Andersen, Jan; Kjærgård, Bente
2012-01-01
A spatial planning act was introduced in Indonesia in 1992 and renewed in 2008. It emphasised the planning role of decentralised authorities. The spatial planning act covers both spatial and environmental issues. It defines the concept of carrying capacity and includes definitions of supportive carrying capacity (SCC) and assimilative carrying capacity (ACC) ... and ACC may increase the political focus on resources and environmental issues and may help to move local authorities towards a more holistic spatial planning approach. A carrying capacity approach could be an inspiration for local spatial planning in developing countries.
Dual capacity reciprocating compressor
Wolfe, Robert W.
1984-01-01
A multi-cylinder compressor 10 particularly useful in connection with northern climate heat pumps and in which different capacities are available in accordance with reversing motor 16 rotation is provided with an eccentric cam 38 on a crank pin 34 under a fraction of the connecting rods, and arranged for rotation upon the crank pin between opposite positions 180° apart so that with cam rotation on the crank pin such that the crank throw is at its normal maximum value all pistons pump at full capacity, and with rotation of the crank shaft in the opposite direction the cam moves to a circumferential position on the crank pin such that the overall crank throw is zero. Pistons 24 whose connecting rods 30 ride on a crank pin 36 without a cam pump their normal rate with either crank rotational direction. Thus a small clearance volume is provided for any piston that moves when in either capacity mode of operation.
Reyes-Nava, J A; Beltran, M R; Michaelian, K
2002-01-01
Thermal stability properties and the melting-like transition of Na_n, n=13-147, clusters are studied through microcanonical molecular dynamics simulations. The metallic bonding in the sodium clusters is mimicked by a many-body Gupta potential based on the second moment approximation of a tight-binding Hamiltonian. The characteristics of the solid-to-liquid transition in the sodium clusters are analyzed by calculating physical quantities like caloric curves, heat capacities, and root-mean-square bond length fluctuations using simulation times of several nanoseconds. Distinct melting mechanisms are obtained for the sodium clusters in the size range investigated. The calculated melting temperatures show an irregular variation with the cluster size, in qualitative agreement with recent experimental results. However, the calculated melting point for the Na_55 cluster is about 40 % lower than the experimental value.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow-up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
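For the Mean Energy Model mentioned above, the maximum-entropy distribution under a single moment ("energy") constraint takes the familiar Gibbs form (a textbook identity, stated here only for orientation):

\[
p_i^{*} = \frac{e^{-\beta E_i}}{\sum_j e^{-\beta E_j}},
\qquad \beta \text{ chosen so that } \sum_i p_i^{*} E_i = \bar{E}.
\]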
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
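A common way to write a regularized Maximum Correntropy Criterion objective of the kind sketched in this abstract, for a linear predictor $\mathbf{w}$ on samples $(\mathbf{x}_i, y_i)$ with a Gaussian kernel of width $\sigma$, is the following; the notation is assumed here rather than taken from the paper:

\[
\max_{\mathbf{w}} \; \sum_{i=1}^{n} \exp\!\left(-\frac{\left(y_i - \mathbf{w}^{\top}\mathbf{x}_i\right)^2}{2\sigma^2}\right) - \lambda \lVert \mathbf{w} \rVert^2 .
\]

The exponential weighting downplays samples with large residuals, which is what makes the criterion robust to noisy and outlying labels, while the $\lambda$ term controls predictor complexity.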
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
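The underlying procedure, increasing PV until a steady-state voltage or current limit is violated, can be summarized by a simple search wrapper. This is a hedged sketch with a placeholder violation check and a monotonicity assumption; it is not the GridLAB-D workflow used in the paper.

```python
def max_pv_penetration(violates, peak_load_kw, tol_kw=1.0):
    """Binary-search the largest PV capacity (kW) for which violates(pv_kw) is False.
    Assumes violations are monotone in PV size, which is not always true in practice."""
    lo, hi = 0.0, 2.0 * peak_load_kw      # search up to 200% of peak load (assumption)
    if violates(lo):
        return 0.0
    if not violates(hi):
        return hi                          # no violation found within the search range
    while hi - lo > tol_kw:
        mid = 0.5 * (lo + hi)
        if violates(mid):
            hi = mid
        else:
            lo = mid
    return lo

# hypothetical usage: `run_powerflow` would wrap a feeder time-series simulation and
# report whether any ANSI Range A voltage or protection-device current limit is exceeded
# result = max_pv_penetration(run_powerflow, peak_load_kw=5000.0)
```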
Equalized near maximum likelihood detector
2012-01-01
This paper presents new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector is named equalized near maximum likelihood detector which combines nonlinear equalizer and near maximum likelihood detector. Simulation results show that the performance of equalized near maximum likelihood detector is better than the performance of nonlinear equalizer but worse than near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Hamer, George
2003-01-01
Beowulf clusters can provide a cost-effective way to compute numerical models and process large amounts of remote sensing image data. Usually a Beowulf cluster is designed to accomplish a specific set of processing goals, and processing is very efficient when the problem remains inside the constraints of the original design. There are cases, however, when one might wish to compute a problem that is beyond the capacity of the local Beowulf system. In these cases, spreading the problem to multiple clusters or to other machines on the network may provide a cost-effective solution.
Histamine headache; Headache - histamine; Migrainous neuralgia; Headache - cluster; Horton's headache; Vascular headache - cluster ... A cluster headache begins as a severe, sudden headache. The headache commonly strikes 2 to 3 hours after you fall ...
Yan, Donghui; Jordan, Michael I
2011-01-01
Inspired by Random Forests (RF) in the context of classification, we propose a new clustering ensemble method---Cluster Forests (CF). Geometrically, CF randomly probes a high-dimensional data cloud to obtain "good local clusterings" and then aggregates via spectral clustering to obtain cluster assignments for the whole dataset. The search for good local clusterings is guided by a cluster quality measure $\\kappa$. CF progressively improves each local clustering in a fashion that resembles the tree growth in RF. Empirical studies on several real-world datasets under two different performance metrics show that CF compares favorably to its competitors. Theoretical analysis shows that the $\\kappa$ criterion is shown to grow each local clustering in a desirable way---it is "noise-resistant." A closed-form expression is obtained for the mis-clustering rate of spectral clustering under a perturbation model, which yields new insights into some aspects of spectral clustering.
Gieles, M.
1993-01-01
Star clusters are observed in almost every galaxy. In this thesis we address several fundamental problems concerning the formation, evolution and disruption of star clusters. From observations of (young) star clusters in the interacting galaxy M51, we found that clusters are formed in complexes of stars and star clusters. These complexes share similar properties with giant molecular clouds, from which they are formed. Many (70%) of the young clusters will not survive the first 10 Myr, due to t...
Capacity and Capacity Utilization in Fishing Industries
Kirkley, James E; Squires, Dale
1999-01-01
Excess capacity of fishing fleets is one of the most pressing problems facing the world's fisheries and the sustainable harvesting of resource stocks. Considerable confusion persists over the definition and measurement of capacity and capacity utilization in fishing. Fishing capacity and capacity utilization, rather than capital (or effort) utilization, provide the appropriate framework. This paper provides both technological-economic and economic definitions of capacity and excess capacity i...
Cunniffe, Siobhan [CRUK-MRC Gray Institute for Radiation Oncology and Biology, Department of Oncology, University of Oxford, Old Road Campus Research Building, Roosevelt Drive, Oxford OX3 7DQ (United Kingdom); O’Neill, Peter, E-mail: peter.oneill@oncology.ox.ac.uk [CRUK-MRC Gray Institute for Radiation Oncology and Biology, Department of Oncology, University of Oxford, Old Road Campus Research Building, Roosevelt Drive, Oxford OX3 7DQ (United Kingdom); Greenberg, Marc M. [Johns Hopkins University, Department of Chemistry, 3400 N. Charles St. , Baltimore, MD 21218 (United States); Lomax, Martine E. [CRUK-MRC Gray Institute for Radiation Oncology and Biology, Department of Oncology, University of Oxford, Old Road Campus Research Building, Roosevelt Drive, Oxford OX3 7DQ (United Kingdom)
2014-04-15
Highlights: • A dL lesion is not repaired as effectively as an AP site. • The repair of a cluster with dL and 8-oxodGuo lesions is compromised. • Delayed repair of the cluster leads to an increase in mutation frequency. - Abstract: A signature of ionizing radiation is the induction of DNA clustered damaged sites. Non-double strand break (DSB) clustered damage has been shown to compromise the base excision repair pathway, extending the lifetimes of the lesions within the cluster, compared to isolated lesions. This increases the likelihood that the lesions persist to replication and thus increases the mutagenic potential of the lesions within the cluster. Lesions formed by ionizing radiation include 8-oxo-7,8-dihydro-2′-deoxyguanosine (8-oxodGuo) and 2-deoxyribonolactone (dL). dL poses an additional challenge to the cell as it is not repaired by the short-patch base excision repair pathway. Here we show recalcitrant dL repair is reflected in mutations observed when DNA containing it and a proximal 8-oxodGuo is replicated in Escherichia coli. 8-oxodGuo in close proximity to dL on the opposing DNA strand results in an enhanced frequency of mutation of the lesions within the cluster and a 20 base sequence flanking the clustered damage site in an E. coli based plasmid assay. In vitro repair of a dL lesion is reduced when compared to the repair of an abasic (AP) site and a tetrahydrofuran (THF), and this is due mainly to a reduction in the activity of polymerase β, leading to retarded FEN1 and ligase 1 activities. This study has given insights into the biological effects of clusters containing dL.
Prabhu, Ninad V.; Sharp, Kim A.
2005-05-01
Heat capacity (Cp) is one of several major thermodynamic quantities commonly measured in proteins. With more than half a dozen definitions, it is the hardest of these quantities to understand in physical terms, but the richest in insight. There are many ramifications of observed Cp changes: The sign distinguishes apolar from polar solvation. It imparts a temperature (T) dependence to entropy and enthalpy that may change their signs and which of them dominate. Protein unfolding usually has a positive ΔCp, producing a maximum in stability and sometimes cold denaturation. There are two heat capacity contributions, from hydration and protein-protein interactions; which dominates in folding and binding is an open question. Theoretical work to date has dealt mostly with the hydration term and can account, at least semiquantitatively, for the major Cp-related features: the positive and negative Cp of hydration for apolar and polar groups, respectively; the convergence of apolar group hydration entropy at T ≈ 112°C; the decrease in apolar hydration Cp with increasing T; and the T-maximum in protein stability and cold denaturation.
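The temperature dependence described in this abstract follows from a temperature-independent ΔCp in the standard protein stability relation (a textbook expression, not a result of this paper), with $T_m$ the melting temperature and $\Delta H_m$ the unfolding enthalpy at $T_m$:

\[
\Delta G_{\mathrm{unf}}(T) = \Delta H_m\left(1 - \frac{T}{T_m}\right) + \Delta C_p\left[(T - T_m) - T \ln\frac{T}{T_m}\right].
\]

With a positive ΔCp of unfolding, this curve is concave in T, which produces the stability maximum and, when it crosses zero at low temperature, the cold denaturation mentioned above.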
Ackerman, Margareta; Ben-David, Shai; Branzei, Simina
2012-01-01
We investigate a natural generalization of the classical clustering problem, considering clustering tasks in which different instances may have different weights. We conduct the first extensive theoretical analysis on the influence of weighted data on standard clustering algorithms in both the partitional and hierarchical settings, characterizing the conditions under which algorithms react to weights. Extending a recent framework for clustering algorithm selection, we propose intuitive properties that would allow users to choose between clustering algorithms in the weighted setting and classify...
Pirandola, Stefano; Lupo, Cosmo; Giovannetti, Vittorio; Mancini, Stefano; Braunstein, Samuel L.
2011-11-01
The readout of a classical memory can be modelled as a problem of quantum channel discrimination, where a decoder retrieves information by distinguishing the different quantum channels encoded in each cell of the memory (Pirandola 2011 Phys. Rev. Lett. 106 090504). In the case of optical memories, such as CDs and DVDs, this discrimination involves lossy bosonic channels and can be remarkably boosted by the use of nonclassical light (quantum reading). Here we generalize these concepts by extending the model of memory from single-cell to multi-cell encoding. In general, information is stored in a block of cells by using a channel-codeword, i.e. a sequence of channels chosen according to a classical code. Correspondingly, the readout of data is realized by a process of ‘parallel’ channel discrimination, where the entire block of cells is probed simultaneously and decoded via an optimal collective measurement. In the limit of a large block we define the quantum reading capacity of the memory, quantifying the maximum number of readable bits per cell. This notion of capacity is nontrivial when we suitably constrain the physical resources of the decoder. For optical memories (encoding bosonic channels), such a constraint is energetic and corresponds to fixing the mean total number of photons per cell. In this case, we are able to prove a separation between the quantum reading capacity and the maximum information rate achievable by classical transmitters, i.e. arbitrary classical mixtures of coherent states. In fact, we can easily construct nonclassical transmitters that are able to outperform any classical transmitter, thus showing that the advantages of quantum reading persist in the optimal multi-cell scenario.
Self-Organizing Tree Using Cluster Validity
Sasaki, Yasue; Suzuki, Yukinori; Miyamoto, Takayuki; Maeda, Junji
Self-organizing tree (S-TREE) models solve clustering problems by imposing tree-structured constraints on the solution. The S-TREE has a self-organizing capacity and performs better than previous tree-structured algorithms. S-TREE carries out pruning to reduce the effect of bad leaf nodes when the tree reaches a predetermined maximum size (U). However, it is difficult to determine U beforehand because it is problem-dependent. U limits tree growth and can also prevent self-organization of the tree, which may produce an unnatural clustering. In this paper, we propose a pruning algorithm that does not require U. The algorithm prunes extra nodes based on a significance level of cluster validity and allows the S-TREE to grow by self-organization. The performance of the new algorithm was examined in vector quantization experiments. The results show that natural leaf nodes are formed by this algorithm without setting a limit on the growth of the S-TREE.
Capacity Analysis for Dynamic Space Networks
Yang Lu; Bo Li; Wenjing Kang; Gongliang Liu; Xueting Li
2015-01-01
To evaluate transmission rate of highly dynamic space networks, a new method for studying space network capacity is proposed in this paper. Using graph theory, network capacity is defined as the maximum amount of flows ground stations can receive per unit time. Combined with a hybrid constellation model, network capacity is calculated and further analyzed for practical cases. Simulation results show that network capacity will increase to different extents as link capacity, minimum ground elevation constraint and satellite onboard processing capability change. Considering the efficiency and reliability of communication networks, how to scientifically design satellite networks is also discussed.
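The capacity notion used here, the maximum flow that ground stations can receive per unit time, reduces to an s-t maximum-flow computation once the topology is fixed. Below is a minimal sketch on a made-up static snapshot, using NetworkX rather than the authors' own tooling; nodes, edges and capacities are illustrative only.

```python
import networkx as nx

G = nx.DiGraph()
# hypothetical snapshot: source spacecraft -> relay satellites -> ground stations -> sink
G.add_edge("src", "sat1", capacity=30)   # capacities in, e.g., Mbit/s
G.add_edge("src", "sat2", capacity=30)
G.add_edge("sat1", "gs1", capacity=20)
G.add_edge("sat1", "gs2", capacity=15)
G.add_edge("sat2", "gs2", capacity=25)
G.add_edge("gs1", "sink", capacity=40)   # aggregate ground-station receive capability
G.add_edge("gs2", "sink", capacity=40)

flow_value, flow_dict = nx.maximum_flow(G, "src", "sink")
print("network capacity of this snapshot:", flow_value)
```

For a dynamic constellation, the same computation would be repeated on (or expanded over) the time-varying graph, which is where link capacity, elevation constraints and onboard processing enter as edge capacities.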
25 CFR 168.5 - Grazing capacity.
2010-04-01
Title 25 (Indians), Bureau of Indian Affairs, Department of the Interior — Land and Water, Grazing Regulations for the Hopi Partitioned Lands Area, § 168.5 Grazing capacity. (a) The Area Director shall prescribe the maximum number of...
Shanna Lara Miglioranzi
2011-12-01
Full Text Available PURPOSE: to check the relation among the values of vital capacity (VC), maximum phonation times (MPT) of the voiceless closed /e/ (/ė/) and of /s/, and height in normal adult women. METHOD: 48 females, between 18 and 44 years, with no factors interfering with the measures of interest (smoking, sports practice, singing, lung or articulation disorders), had their VC, MPT/ė/ and MPT/s/ measured three times each; the highest value produced for each variable was selected for analysis, along with the self-reported height. All four variables were compared statistically. Spearman's correlation coefficient was used to check the relationships, the Wilcoxon test for related samples was used to compare MPT/s/ and MPT/ė/, and the coefficient of variation was calculated to compare the homogeneity of these variables. RESULTS: significant positive correlations between VC and MPT/s/ (r = 0.326; P = 0.024), VC and MPT/ė/ (r = 0.379; P = 0.008), MPT/s/ and MPT/ė/ (r = 0.360; P = 0.012), and VC and height (r = 0.432; P = 0.002). MPT/s/ was significantly greater than MPT/ė/. The MPT/ė/ of the sample (10.43 s) was significantly lower than the reference values (P
Sanfilippo, Antonio P.; Calapristi, Augustin J.; Crow, Vernon L.; Hetzler, Elizabeth G.; Turner, Alan E.
2004-05-26
We present an approach to the disambiguation of cluster labels that capitalizes on the notion of semantic similarity to assign WordNet senses to cluster labels. The approach provides interesting insights on how document clustering can provide the basis for developing a novel approach to word sense disambiguation.
曹姗; 耿光飞; 彭宏
2014-01-01
For the problem of unequal grouping of shunt compensation capacity in substations, a new optimization method based on curve segmentation and clustering is proposed. First, the required compensation capacity curve is calculated from the transformer parameters and the load curve, and its maximum value is taken as the total compensation capacity. The curve is then partitioned into several segments and the segmentation results are clustered into K clusters, where the number of clusters equals the number of capacitor groups; the clustering results are further modified to satisfy the constraint that the group capacities sum to the total capacity, and the difference between two adjacent cluster levels is taken as the capacity of each group. To obtain stable grouping results, the relationship between the number of segments and the grouping results is studied, and an empirical rule for choosing the number of segments is given based on a fluctuation coefficient. After the grouping plan is determined, the nine-area diagram is used as the control strategy for the transformer tap and capacitor switching. Finally, simulation with an equivalent practical power grid and load profile shows the availability and rationality of the method: with the capacitors divided into 3 groups, the power loss is lower than with the equal-capacity grouping method.
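Setting aside the curve-segmentation step and the nine-area control strategy, the core grouping idea, clustering the required-compensation curve into K levels and taking differences of adjacent levels as group capacities, can be sketched as follows. The demand curve is hypothetical and SciPy's K-means stands in for the paper's procedure.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def capacitor_groups(q_required, k=3, seed=0):
    """Cluster a reactive-compensation demand curve (e.g., kvar per hour) into k levels
    and return per-group capacities whose cumulative sums reproduce those levels."""
    q = np.asarray(q_required, dtype=float)
    q_total = q.max()                             # total compensation capacity
    centers, _ = kmeans2(q.reshape(-1, 1), k, minit="++", seed=seed)
    levels = np.sort(centers.ravel())
    levels[-1] = q_total                          # enforce: sum of groups == total capacity
    groups = np.diff(np.concatenate(([0.0], levels)))
    return groups

# hypothetical 24-hour reactive power demand curve (kvar)
demand = [800, 900, 1500, 2400, 3100, 3300, 2800, 1900, 1200, 950, 850, 820,
          900, 1600, 2500, 3200, 3350, 3000, 2200, 1500, 1100, 950, 880, 840]
print(capacitor_groups(demand, k=3))
```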
Random Access Transport Capacity
Andrews, Jeffrey G; Kountouris, Marios; Haenggi, Martin
2009-01-01
We develop a new metric for quantifying end-to-end throughput in multihop wireless networks, which we term random access transport capacity, since the interference model presumes uncoordinated transmissions. The metric quantifies the average maximum rate of successful end-to-end transmissions, multiplied by the communication distance, and normalized by the network area. We show that a simple upper bound on this quantity is computable in closed-form in terms of key network parameters when the number of retransmissions is not restricted and the hops are assumed to be equally spaced on a line between the source and destination. We also derive the optimum number of hops and optimal per hop success probability and show that our result follows the well-known square root scaling law while providing exact expressions for the preconstants as well. Numerical results demonstrate that the upper bound is accurate for the purpose of determining the optimal hop count and success (or outage) probability.
Melike Ersoy
2014-06-01
Full Text Available In this study, capacity estimations that incorporate the Highway Capacity Manual (HCM 2010) method are evaluated. A parameter-based sensitivity analysis of calculations with the new HCM formula and a comparative evaluation of the new methodology against the two most common capacity analysis methods, i.e., the method of critical gap acceptance and the method of regression analysis, are performed. Maximum and minimum headway intervals of the follow-up time and critical gap parameters are alternated within the sensitivity analysis. The Transport Research Laboratory formula for regression and the Australian formula for the gap acceptance method are considered in the comparison. Relative comparisons of capacity predictions by the HCM 2010 method, regression analysis and the gap acceptance method are presented using field data obtained by observations at two roundabouts in Izmir, Turkey. The results of the study show that the HCM 2010 formula led to lower capacity estimates than regression analysis and higher estimates than the gap acceptance method. Regarding the real capacity observations under high circulating flow rates, the HCM 2010 method yielded more appropriate results than the regression method. In addition, the sensitivity analysis shows that entry capacity estimates change more sharply as smaller follow-up headways are accepted.
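For orientation, the HCM 2010 entry-capacity model referred to in this abstract has an exponential form in the conflicting circulating flow. The sketch below uses the commonly cited single-lane parameter values as assumptions; they should be verified against the manual before any real use.

```python
import math

def hcm2010_entry_capacity(conflicting_flow_pcph, a=1130.0, b=1.0e-3):
    """Approximate HCM 2010 roundabout entry capacity (pc/h) as a function of the
    conflicting circulating flow (pc/h); a and b are assumed single-lane defaults."""
    return a * math.exp(-b * conflicting_flow_pcph)

for v_c in (200, 600, 1000):
    print(v_c, round(hcm2010_entry_capacity(v_c)))
```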
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and extract hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments in both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
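For readers who want to experiment with the combinatorial core of this approach, NetworkX provides a maximum weight clique solver. The toy compatibility graph and integer node weights below (standing in for inlier counts) are invented for illustration and are not the construction used in the paper.

```python
import networkx as nx

# nodes = candidate cluster-to-cluster correspondences, weight = number of supporting inliers
G = nx.Graph()
G.add_nodes_from([("A", {"weight": 12}), ("B", {"weight": 9}),
                  ("C", {"weight": 7}),  ("D", {"weight": 5})])
# edges connect correspondences that are mutually consistent (could belong to one structure)
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])

clique, weight = nx.max_weight_clique(G, weight="weight")
print("selected correspondences:", clique, "total inliers:", weight)
```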
Kneib, Jean-Paul; 10.1007/s00159-011-0047-3
2012-01-01
Clusters of galaxies are the most recently assembled, massive, bound structures in the Universe. As predicted by General Relativity, given their masses, clusters strongly deform space-time in their vicinity. Clusters act as some of the most powerful gravitational lenses in the Universe. Light rays traversing through clusters from distant sources are hence deflected, and the resulting images of these distant objects therefore appear distorted and magnified. Lensing by clusters occurs in two regimes, each with unique observational signatures. The strong lensing regime is characterized by effects readily seen by eye, namely, the production of giant arcs, multiple-images, and arclets. The weak lensing regime is characterized by small deformations in the shapes of background galaxies only detectable statistically. Cluster lenses have been exploited successfully to address several important current questions in cosmology: (i) the study of the lens(es) - understanding cluster mass distributions and issues pertaining...
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, the researches on the synchronization phenomenon are key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
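The pairwise maximum entropy model referred to here is the standard Ising-type form over binary states $s_i \in \{-1,+1\}$ (expansion or recession of economy $i$); the notation below is generic rather than the paper's:

\[
P(\mathbf{s}) = \frac{1}{Z} \exp\!\left(\sum_i h_i s_i + \sum_{i<j} J_{ij}\, s_i s_j\right),
\]

where the fields $h_i$ and couplings $J_{ij}$ are fitted so that the model reproduces the observed single-economy activation rates and pairwise correlations, and $Z$ normalizes the distribution.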
Targeting BSP Library for SMP Cluster
董景毅; 丁俊; 孟睿; 童维勤
2001-01-01
Using commodity SMPs (shared memory processors) to build cluster-based supercomputers has become a mainstream trend. Yet programming this kind of supercomputer system requires an environment that supports both message passing and shared memory programming. This paper describes our preliminary work in an effort to target a BSP library for clusters of SMPs. In order to exploit the maximum performance potential that a cluster of SMPs brings, we adopt thread techniques to reduce system overhead and to exploit the capacity of SMPs. A three-layer synchronization mechanism is proposed to support barrier synchronization within an SMP node, within a group of SMP nodes, and across the whole cluster, respectively. A comparison is made between our BSP library and currently available BSP libraries such as PUB.
Wagstaff, Kiri L.
2012-03-01
On obtaining a new data set, the researcher is immediately faced with the challenge of obtaining a high-level understanding from the observations. What does a typical item look like? What are the dominant trends? How many distinct groups are included in the data set, and how is each one characterized? Which observable values are common, and which rarely occur? Which items stand out as anomalies or outliers from the rest of the data? This challenge is exacerbated by the steady growth in data set size [11] as new instruments push into new frontiers of parameter space, via improvements in temporal, spatial, and spectral resolution, or by the desire to "fuse" observations from different modalities and instruments into a larger-picture understanding of the same underlying phenomenon. Data clustering algorithms provide a variety of solutions for this task. They can generate summaries, locate outliers, compress data, identify dense or sparse regions of feature space, and build data models. It is useful to note up front that "clusters" in this context refer to groups of items within some descriptive feature space, not (necessarily) to "galaxy clusters" which are dense regions in physical space. The goal of this chapter is to survey a variety of data clustering methods, with an eye toward their applicability to astronomical data analysis. In addition to improving the individual researcher’s understanding of a given data set, clustering has led directly to scientific advances, such as the discovery of new subclasses of stars [14] and gamma-ray bursts (GRBs) [38]. All clustering algorithms seek to identify groups within a data set that reflect some observed, quantifiable structure. Clustering is traditionally an unsupervised approach to data analysis, in the sense that it operates without any direct guidance about which items should be assigned to which clusters. There has been a recent trend in the clustering literature toward supporting semisupervised or constrained
Accessible Capacity of Secondary Users
Ma, Xiao; Lin, Lei; Bai, Baoming
2010-01-01
A new problem formulation is presented for Gaussian interference channels (GIFC) with two pairs of users, which are distinguished as primary users and secondary users, respectively. The primary users employ a pair of encoder and decoder that were originally designed to satisfy a given error performance requirement under the assumption that no interference exists from other users. In the case when the secondary users attempt to access the same medium, we are interested in the maximum transmission rate (defined as the {\em accessible capacity}) at which the secondary users can communicate reliably without affecting the error performance requirement of the primary users, under the constraint that the primary encoder (not the decoder) is kept unchanged. By modeling the primary encoder as a generalized trellis code (GTC), we are then able to treat the secondary link as a finite state channel (FSC). The relation of the accessible capacity to the capacity region of the GIFC is revealed. Upper and lower bounds on the acce...
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
2011-01-01
Consisting of eight scientists from the State Key Laboratory of Physical Chemistry of Solid Surfaces and Xiamen University, this creative research group is devoted to research on cluster chemistry and the creation of nanomaterials. After three years of hard work, the group made a series of encouraging advances in the synthesis of clusters with special structures, including novel fullerenes, fullerene-like metal cluster compounds and other related nanomaterials, and in the study of their properties.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different modes of transfer affect the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
Spatial clustering of pixels of a multispectral image
Conger, James Lynn
2014-08-19
A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and its most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
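The per-pixel maximum-similarity computation in this record can be sketched with numpy. The sketch below is an assumption-laden illustration, not the patented system: similarity is taken as negative Euclidean distance between spectra, neighbors are the 4-connected ones, and the filtering threshold is invented.

    import numpy as np

    def max_neighbor_similarity(img):
        """Per-pixel maximum spectral similarity to any 4-neighbor;
        img has shape (rows, cols, bands)."""
        h, w, _ = img.shape
        best = np.full((h, w), -np.inf)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            r0, r1 = max(dr, 0), h + min(dr, 0)
            c0, c1 = max(dc, 0), w + min(dc, 0)
            diff = img[r0:r1, c0:c1] - img[r0 - dr:r1 - dr, c0 - dc:c1 - dc]
            best[r0:r1, c0:c1] = np.maximum(best[r0:r1, c0:c1],
                                            -np.linalg.norm(diff, axis=-1))
        return best

    # Pixels whose best similarity falls below a threshold stay unclustered.
    img = np.random.default_rng(2).random((5, 5, 3))
    eligible = max_neighbor_similarity(img) >= -0.4   # hypothetical threshold
    print(eligible.sum(), "of", eligible.size, "pixels eligible for clustering")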
Clustered regression with unknown clusters
Barman, Kishor
2011-01-01
We consider a collection of prediction experiments, which are clustered in the sense that groups of experiments exhibit similar relationship between the predictor and response variables. The experiment clusters as well as the regression relationships are unknown. The regression relationships define the experiment clusters, and in general, the predictor and response variables may not exhibit any clustering. We call this prediction problem clustered regression with unknown clusters (CRUC) and in this paper we focus on linear regression. We study and compare several methods for CRUC, demonstrate their applicability to the Yahoo Learning-to-rank Challenge (YLRC) dataset, and investigate an associated mathematical model. CRUC is at the crossroads of many prior works and we study several prediction algorithms with diverse origins: an adaptation of the expectation-maximization algorithm, an approach inspired by K-means clustering, the singular value thresholding approach to matrix rank minimization u...
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form, together with guidelines and recent data collected by the author, provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Subspace clustering through attribute clustering
Kun NIU; Shubo ZHANG; Junliang CHEN
2008-01-01
Many recently proposed subspace clustering methods suffer from two severe problems. First, the algorithms typically scale exponentially with the data dimensionality or the subspace dimensionality of clusters. Second, the clustering results are often sensitive to input parameters. In this paper, a fast algorithm of subspace clustering using attribute clustering is proposed to overcome these limitations. This algorithm first filters out redundant attributes by computing the Gini coefficient. To evaluate the correlation of every two non-redundant attributes, the relation matrix of non-redundant attributes is constructed based on the relation function of two-dimensional united Gini coefficients. After applying an overlapping clustering algorithm on the relation matrix, the set of candidate interesting subspaces is obtained. Finally, all subspace clusters can be derived by clustering on the interesting subspaces. Experiments on both synthetic and real datasets show that the new algorithm not only achieves significant gains in runtime and quality in finding subspace clusters, but is also insensitive to input parameters.
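The attribute-filtering step based on the Gini coefficient can be illustrated briefly. The sketch below bins each attribute and computes the Gini impurity of the bin distribution; the binning scheme and the redundancy threshold are assumptions for illustration, not the paper's exact relation-function machinery.

    import numpy as np

    def gini(binned):
        """Gini impurity 1 - sum(p_k^2) of a discretized attribute."""
        _, counts = np.unique(binned, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def filter_attributes(X, n_bins=10, threshold=0.5):
        """Keep attributes whose binned Gini impurity exceeds a threshold
        (hypothetical criterion standing in for the paper's filter)."""
        keep = []
        for j in range(X.shape[1]):
            edges = np.histogram_bin_edges(X[:, j], bins=n_bins)
            if gini(np.digitize(X[:, j], edges)) >= threshold:
                keep.append(j)
        return keep

    X = np.random.default_rng(4).random((100, 6))
    print(filter_attributes(X))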
49 CFR 193.2181 - Impoundment capacity: LNG storage tanks.
2010-10-01
Each impounding system serving an LNG storage tank must have a minimum volumetric liquid impoundment capacity of: (a) 110 percent of the LNG tank's maximum...
46 CFR 154.806 - Capacity of pressure relief valves.
2010-10-01
Pressure relief valves for each... pressure above the set pressure of the relief valves: (a) The maximum capacity of an installed cargo...
Optical information capacity of silicon
Dimitropoulos, Dimitris
2014-01-01
Modern computing and data storage systems increasingly rely on parallel architectures where processing and storage load is distributed within a cluster of nodes. The necessity for high-bandwidth data links has made optical communication a critical constituent of modern information systems, and silicon the leading platform for creating the necessary optical components. While silicon is arguably the most extensively studied material in history, an analysis of one of its most important attributes, its capacity to carry optical information, has not been reported. The calculation of the information capacity of silicon is complicated by nonlinear losses, phenomena that emerge in optical nanowires as a result of the concentration of optical power in a small geometry. Nonlinear losses are absent in silica glass optical fiber and other common communication channels. While nonlinear loss in silicon is well known, the noise and fluctuations that arise from it have never been considered. Here we report sources of fluctuations...
侯婷婷; 娄素华; 张滋华; 吴耀武
2012-01-01
Since wind power has been developed on a large scale and in a highly centralized way in recent years, and wind bases are usually geographically inconsistent with load centers, transmitting wind power through high-voltage transmission lines will be an inevitable trend. In this new situation, the paper presents an optimization methodology for the thermal generation capacity to be transmitted together with wind power, given wind power's variability and low energy density. To account for the random nature of wind power, the duration curve of the spare capacity of the transmission line (STC), which can be used to transmit thermal power, is introduced to illustrate the characteristics of the capacity available for thermal power after transmitting wind power. Based on the duration curve of STC, a model for optimizing the capacity of the accompanying thermal sources is proposed, which takes into account transmission line costs, thermal source costs and the benefit of the electric power transmitted; the objective function to be maximized is the total benefit, with wind power given transmission priority so that the transmission channel is fully utilized. The model is solved by a two-stage optimization strategy. Case studies are carried out on an example system, the effects of coal price and electricity price on the optimal schemes are analyzed, and the results verify the correctness and effectiveness of the presented method.
AVAILABLE SOIL WATER CAPACITY AS A DISCRIMINANT FACTOR IN MIXED OAK FOREST OF CENTRAL ITALY
A. TESTI
2004-05-01
Soil water content is a critical factor in Mediterranean forest vegetation, especially in areas subjected to prolonged summer drought where winter and autumn rainfall are the main sources of water. Available soil water capacity (AWC) is the maximum amount of water available for plants that a soil could possibly contain. Each soil has a specific available water capacity; however, most of the published literature on AWC refers to agricultural settings, although the interaction between the soil and the vegetation dynamics has long been recognized. The aim of this study was to investigate whether this edaphic factor could be discriminant in the species assemblage of communities belonging to the thermophilous oak forest (order Quercetalia pubescentis). Thirty-two vegetation relevés and soil profiles were carried out in five different sites, with a similar pluvio-thermic regime, located in the sub-coastal belt of Latium, Central Italy. From the physical-chemical analyses of the soil profiles, the AWC values of the related relevés were calculated. Multivariate statistical analysis was applied to the vegetation surveys, using Cluster Analysis, from which a classification into three different clusters was obtained; subsequently the AWC values were grouped according to the classification obtained. Analysis of variance was used to test similarity and the output pointed out a significant difference among the three clusters (F=6.35; P
AVAILABLE SOIL WATER CAPACITY AS A DISCRIMINANT FACTOR IN MIXED OAK FOREST OF CENTRAL ITALY
A. SERAFINI SAULI
2004-01-01
Full Text Available Soil water content is a critical factor in Mediterranean forest vegetation, especially in areas subjected to prolonged summer drought where winter and autumn rainfall are the main sources of water. Available soil water capacity (AWC is the maximum amount of water available for plants that a soil could possibly contain. Each soil has a specific available water capacity, however, most of the published literature on AWC refers 10 agricultural settings, although the interaction between the soil and the vegetation dynamics has long been recognized. The aim of this study was to investigate whether this edaphic factor could be discriminant in species assemblage of communities belonging to the thermophylous oak forest (order Quercetalia pubescentis. Thirty-two vegetation relevés and soil profiles were carried out in five different sites, with a similar pluvio-thermic regime, located in the sub-coastal belt of Latium, Central Italy. From the physical-chemical analyses of soil profiles, the AWC values, of the related relevés, were calculated. Multivariate statistical analysis was applied to the vegetation surveys, using Cluster Analysis from which a classification in three different clusters was obtained; subsequently the AWC values were grouped according to the c1assification obtained. Analysis of variance was used to test similarity and the output pointed out a significant difference among the three clusters (F=6.35; P
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
易力; 胡振华
2013-01-01
Based on the system dynamics method, this paper sets up quantitative models of how absorptive capacity during knowledge transfer affects cluster firms' indigenous innovation, and reveals, through Vensim PLE simulation, a path for upgrading cluster firms' indigenous innovation ability. Three simulation models are proposed. First, an asymptotic growth model shows that newly established firms' indigenous innovation is constrained by low absorptive capacity. Second, an exponential growth model demonstrates that properly enhancing absorptive capacity can break through the growth bottleneck of the knowledge stock, and that radical entrepreneurship is an indispensable factor for firms to grow. Third, an S-shaped growth model shows that mutual coordination between dynamic absorptive capacity and R&D input and output characterizes mature firms, and that the appearance of an inflection point marks indigenous innovation entering a consolidation stage.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Sheppard's lemma, as well as of duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions, $p_1(\Theta_1)$ over the first-view classifier parameter $\Theta_1$ and $p_2(\Theta_2)$ over the second-view classifier parameter $\Theta_2$. We name the new MVMED framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of the AMVMED, and comparisons with MVMED are also reported.
Böcker, S.; Baumbach, Jan
2013-01-01
The Cluster Editing problem asks to transform a graph into a disjoint union of cliques using a minimum number of edge modifications. Although the problem has been proven NP-complete several times, it has nevertheless attracted much research both from the theoretical and the applied side. The problem has been the inspiration for numerous algorithms in bioinformatics, aiming at clustering entities such as genes, proteins, phenotypes, or patients. In this paper, we review exact and heuristic methods that have been proposed for the Cluster Editing problem, and also applications...
Ackerman, Margareta; Branzei, Simina; Loker, David
2011-01-01
In this paper we investigate clustering in the weighted setting, in which every data point is assigned a real valued weight. We conduct a theoretical analysis on the influence of weighted data on standard clustering algorithms in each of the partitional and hierarchical settings, characterising the precise conditions under which such algorithms react to weights, and classifying clustering methods into three broad categories: weight-responsive, weight-considering, and weight-robust. Our analysis raises several interesting questions and can be directly mapped to the classical unweighted setting.
Everitt, Brian S; Leese, Morven; Stahl, Daniel
2011-01-01
Cluster analysis comprises a range of methods for classifying multivariate data into subgroups. By organizing multivariate data into such subgroups, clustering can help reveal the characteristics of any structure or patterns present. These techniques have proven useful in a wide range of areas such as medicine, psychology, market research and bioinformatics. This fifth edition of the highly successful Cluster Analysis includes coverage of the latest developments in the field and a new chapter dealing with finite mixture models for structured data. Real life examples are used throughout to demonstrate...
Ramin Payrovi
2007-08-01
Best Performance: With our Hipax Cluster PACS Server solution we are introducing the parallel computing concept as an extremely fast software system to the PACS world. In contrast to common PACS servers, the Hipax Cluster PACS software is not restricted to one or two computers, but can be run on several servers controlling each other. Thus, the same services can run simultaneously on different computers. The scalable system can also be expanded later without loss of performance by adding further processors or Hipax server units, for example if new clients or modalities are to be connected.
Maximum Failure Security: The cluster server concept offers high failure security. If one of the server PCs breaks down, its services can temporarily be taken over by another Hipax server unit. If overload of one of the server PCs is imminent, the services will be carried out by another Hipax unit (load balancing). To increase security, e.g. against fire, the individual Hipax servers can also be located separately. This concept offers maximum security, flexibility, performance, redundancy and scalability.
The Hipax Cluster PACS Server is easy to administrate using a web interface. In the case of a system failure (e.g. overloading or breakdown of a server PC) the system administrator receives a message via email and is thus enabled to solve the problem.
Features:
• Based on an SQL database
• Different services running on separate PCs
• The Hipax server units are coordinated and able to control each other
• Extends the power of a cluster server to the whole PACS (more processors)
• Scalable to demand
• Maximum performance
• Load balancing for optimum efficiency
• Maximum failure security through redundancy
• Warning email automatically sent to the system administrator in the case of failure
• Web interface for system configuration
• Maintenance without shutting down the system
Capacity Statement for Railways
Landex, Alex
2007-01-01
The subject “railway capacity” is a combination of the capacity consumption and how the capacity is utilized. The capacity utilization of railways can be divided into four core elements: the number of trains, the average speed, the heterogeneity of the operation, and the stability. This article describes how the capacity consumption of railways can be worked out, together with analytical measurements of how the capacity is utilized. Furthermore, the article describes how it is possible to state and visualize railway capacity. Having unused railway capacity is not always equal to being able to operate more...
Berks, G.; Keyserlingk, Diedrich Graf von; Jantzen, Jan
2000-01-01
A symptom is a condition indicating the presence of a disease, especially when regarded as an aid in diagnosis. Symptoms are the smallest units indicating the existence of a disease. A syndrome, on the other hand, is an aggregate, set or cluster of concurrent symptoms which together indicate a disease. Classification and clustering are the basic concerns in medicine. Classification depends on definitions of the classes and the required degree of participation of the elements in the cases' symptoms. In medicine imprecise conditions are the rule, and therefore fuzzy methods are much more suitable than crisp ones. Fuzzy c-means clustering is an easy and well improved tool, which has been applied in many medical fields. We used c-means fuzzy clustering after feature extraction from an aphasia database. Factor analysis was applied on a correlation matrix of 26 symptoms of language disorders and led to five factors. The factors...
Classical information capacity of superdense coding
Bowen, G H
2001-01-01
Classical communication through quantum channels may be enhanced by sharing entanglement. Superdense coding allows the encoding, and transmission, of up to two classical bits of information in a single qubit. In this paper, the maximum classical channel capacity for states that are not maximally entangled is derived. Particular schemes are then shown to attain this capacity, firstly for pairs of qubits, and secondly for pairs of qutrits.
张锐丽; 史凤隆; 高万春
2013-01-01
Maintenance support capability assessment involves many measurable indicators, and how to streamline the large number of index values is an active research problem. We used factor analysis to integrate the various indicators, taking their correlations into account, and extracted the common factors. Based on the maintenance indicators represented by the common factors, we reintegrated the original data and divided it into groups using hierarchical clustering.
Martín-Herrero, J.
2004-10-01
I present a hybrid method for the labelling of clusters in two-dimensional lattices, which combines the recursive approach with iterative scanning to reduce the stack size required by the pure recursive technique, while keeping its benefits: single pass and straightforward cluster characterization and percolation detection parallel to the labelling. While the capacity to hold the entire lattice in memory is usually regarded as the major constraint for the applicability of the recursive technique, the required stack size is the real limiting factor. Resorting to recursion only for the transverse direction greatly reduces the recursion depth and therefore the required stack. It also enhances the overall performance of the recursive technique, as is shown by results on a set of uniform random binary lattices and on a set of samples of the Ising model. I also show how this technique may replace the recursive technique in Wolff's cluster algorithm, decreasing the risk of stack overflow and increasing its speed, and the Hoshen-Kopelman algorithm in the Swendsen-Wang cluster algorithm, allowing effortless characterization during generation of the samples and increasing its speed.
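The hybrid idea, iterating along one lattice direction and recursing only transversely, is close in spirit to scanline flood fill. A minimal sketch, under that interpretation (not the author's exact code), is:

    import numpy as np

    def label_clusters(grid):
        """Label 4-connected clusters of 1s: runs along a row are swept
        iteratively; recursion happens only in the transverse direction,
        which keeps the recursion depth well below pure pixel recursion."""
        h, w = grid.shape
        labels = np.zeros((h, w), dtype=int)

        def fill(r, c, lab):
            left = c
            while left > 0 and grid[r, left - 1] and not labels[r, left - 1]:
                left -= 1
            right = c
            while right < w - 1 and grid[r, right + 1] and not labels[r, right + 1]:
                right += 1
            labels[r, left:right + 1] = lab          # label the whole run at once
            for cc in range(left, right + 1):        # recurse only up/down
                for rr in (r - 1, r + 1):
                    if 0 <= rr < h and grid[rr, cc] and not labels[rr, cc]:
                        fill(rr, cc, lab)

        lab = 0
        for r in range(h):
            for c in range(w):
                if grid[r, c] and not labels[r, c]:
                    lab += 1
                    fill(r, c, lab)
        return labels

    grid = np.array([[1, 1, 0, 0],
                     [0, 1, 0, 1],
                     [0, 0, 0, 1]])
    print(label_clusters(grid))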
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Algorithm of capacity expansion on networks optimization
Anonymous
2003-01-01
The paper points out the relationship between the bottleneck and the minimum cutset of a network, and presents a capacity expansion algorithm of network optimization to solve the network bottleneck problem. The complexity of the algorithm is also analyzed. As required by the algorithm, virtual sources are introduced along the positive-direction subsections of the network, each given a certain capacity value. Simultaneously, a corresponding capacity-expanded network is constructed to search for all minimum cutsets. For a given maximum flow value of the network, the authors find an adjustment value for each minimum cutset's arc group by stepwise reverse calculation, and mark out the feasible flow on the capacity-expanded network again as the adjustment value increases. This is repeated until the original topology structure is restored. The algorithm can thus increase the capacity of networks effectively and solve the bottleneck problem of networks.
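The bottleneck/min-cut relationship the paper builds on is easy to demonstrate with off-the-shelf max-flow routines. The sketch below uses networkx on a toy network (capacities invented); the saturated arcs crossing the minimum cut are exactly the ones whose expansion raises the network capacity.

    import networkx as nx

    G = nx.DiGraph()
    for u, v, c in [("s", "a", 4), ("s", "b", 7), ("a", "t", 6),
                    ("b", "t", 3), ("a", "b", 2)]:
        G.add_edge(u, v, capacity=c)

    flow_value, _ = nx.maximum_flow(G, "s", "t")
    cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
    bottleneck = [(u, v) for u, v in G.edges if u in S and v in T]
    print(flow_value, bottleneck)   # expanding these arcs increases capacity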
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
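The regularization term, mutual information between discrete classification responses and true labels, can be estimated with a simple plug-in formula; the sketch below is a generic estimator, not the authors' exact entropy-estimation scheme.

    import numpy as np

    def mutual_information(responses, labels):
        """Plug-in MI estimate between two discrete label vectors."""
        joint = np.zeros((responses.max() + 1, labels.max() + 1))
        for r, y in zip(responses, labels):
            joint[r, y] += 1
        joint /= joint.sum()
        pr = joint.sum(axis=1, keepdims=True)     # marginal of responses
        py = joint.sum(axis=0, keepdims=True)     # marginal of labels
        nz = joint > 0
        return float((joint[nz] * np.log(joint[nz] / (pr @ py)[nz])).sum())

    resp = np.array([0, 0, 1, 1, 1, 0])
    ys = np.array([0, 0, 1, 1, 0, 0])
    print(mutual_information(resp, ys))   # to be maximized during training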
Estimation of Aerobic Capacity (VO2-max) and Physical Work Capacity in Laborers
Sedeghe Hosseinabadi
2013-06-01
Introduction: Measurement of maximum aerobic capacity (VO2-max) is important in physiologically matching laborers to their jobs. This study was conducted to estimate the aerobic capacity and physical work capacity (PWC) of workers in the galvanizing department of the Semnan Rolling Pipe Company, and to determine the relative frequency of workers whose jobs were proportional to their physical work capacity. Methods: 50 male workers of the Semnan Rolling Pipe Company were selected randomly to participate in this cross-sectional study. The Tuxworth & Shahnavaz method was applied to estimate VO2-max. Independent-sample t-tests and correlation techniques were used to analyze the data with SPSS software. Results: The average maximum aerobic capacity of the workers was 2.88 ± 0.033 liters per minute and the average physical work capacity was 4.76 ± 0.54 kilocalories per minute. There was a significant relationship between body mass index and aerobic capacity. The results showed that 36 percent of the subjects expend more energy than their physical work capacity to perform their duties during work time. Conclusion: According to the ILO classification, the average physical work capacity of the workers falls into the light energy category; accordingly, on average, these workers had the physical ability to perform only lighter duties. More than one-third of these workers need job modification or a change to a job with lower energy consumption.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Christensen, Thomas Budde
..., Portugal and New Zealand have adopted the concept. Public sector interventions that aim to support cluster development in industries most often focus upon economic policy goals such as enhanced employment and improved productivity, but rarely emphasise broader societal policy goals relating to e.g. sustainability or quality of life. The purpose of this paper is to explore how and to what extent public sector interventions that aim at forcing cluster development in industries can support sustainable development as defined in the Brundtland tradition and more recently elaborated in such concepts as eco... ...to the automotive sector in Wales. Specifically, the paper evaluates the "Accelerate" programme initiated by the Welsh Development Agency and elaborates on how and to what extent the Accelerate programme supports the development of a sustainable automotive industry cluster. The Accelerate programme was set up...
Discontinuous symplectic capacities
Zehmisch, K.; Ziltener, F.J.
2014-01-01
We show that the spherical capacity is discontinuous on a smooth family of ellipsoidal shells. Moreover, we prove that the shell capacity is discontinuous on a family of open sets with smooth connected boundaries.
Ryberg, Jesper
2014-01-01
That responsible moral agency presupposes certain mental capacities constitutes a widely accepted view among theorists. Moreover, it is often assumed that degrees in the development of the relevant capacities co-vary with degrees of responsibility. In this article it is argued that the move from the view that responsibility requires certain mental capacities to the position that degrees of responsibility co-vary with degrees of the development of the mental capacities is premature.
Information storage capacity of incompletely connected associative memories.
Bosch, Holger; Kurfess, Franz J.
1998-07-01
In this paper, the memory capacity of incompletely connected associative memories is investigated. First, the capacity is derived for memories with fixed parameters. Optimization of the parameters yields a maximum capacity between 0.53 and 0.69 for hetero-association and half of that for auto-association, improving previously reported results. The maximum capacity grows with increasing connectivity of the memory and requires sparse input and output patterns. Furthermore, the parameters can be chosen in such a way that the information content per pattern asymptotically approaches 1 with growing size of the memory.
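A Willshaw-style toy model shows how such capacities are measured empirically; the sketch below stores sparse binary pattern pairs through clipped Hebbian learning on an incompletely connected weight matrix and checks recall (sizes, sparsity and connectivity are invented, and this is not the authors' exact model).

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_out, n_pat, k = 256, 256, 40, 8     # hypothetical sizes; k active units
    mask = rng.random((n_out, n_in)) < 0.7      # 70% connectivity (incomplete)

    X = np.zeros((n_pat, n_in), dtype=bool)
    Y = np.zeros((n_pat, n_out), dtype=bool)
    W = np.zeros((n_out, n_in), dtype=bool)
    for p in range(n_pat):
        X[p, rng.choice(n_in, k, replace=False)] = True
        Y[p, rng.choice(n_out, k, replace=False)] = True
        W |= np.outer(Y[p], X[p]) & mask        # clipped Hebbian storage

    # Recall: a unit fires iff all of its existing links from active inputs
    # are potentiated (threshold adapted to the missing connections).
    s = (W & X[0]).sum(axis=1)
    theta = (mask & X[0]).sum(axis=1)
    recalled = (s == theta) & (theta > 0)
    print("recall accuracy:", (recalled == Y[0]).mean())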
CDMA systems capacity engineering
Kim, Kiseon
2004-01-01
This new hands-on resource tackles capacity planning and engineering issues that are crucial to optimizing wireless communication systems performance. Going beyond the system physical level and investigating CDMA system capacity at the service level, this volume is the single-source for engineering and analyzing systems capacity and resources.
Deuterium cluster jet produced at moderate backing pressures
Hongbin Wang; Tianshu Wen; Yingling He; Chunye Jiao; Shuanggen Zhang; Xiangxian Wang; Fangfang Ge; Hongjie Liu; Guoquan Ni; Xiangdong Yang; Yuqiu Gu; Xianlun Wen; Weimin Zhou; Guangchang Wang
2006-01-01
A deuterium cluster jet produced in the supersonic expansion into vacuum of deuterium gas at liquid nitrogen temperature and moderate backing pressures is studied by Rayleigh scattering techniques. The experimental results show that deuterium clusters can be created at moderate gas backing pressures ranging from 8 to 23 bar, and a maximum average cluster size of 350 atoms per cluster is estimated. The temporal evolution of the cluster jet generated at a backing pressure of 20 bar demonstrates a two-plateau structure. The possible mechanism responsible for this structure is discussed. The former plateau, with higher average atom and cluster densities, is more suitable for general laser-cluster interaction experiments.
Expectation-Maximization Binary Clustering for Behavioural Annotation.
Joan Garriga
The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis.
Expectation-Maximization Binary Clustering for Behavioural Annotation.
Garriga, Joan; Palmer, John R B; Oltra, Aitana; Bartumeus, Frederic
2016-01-01
The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis.
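The core EMC step, a maximum-likelihood Gaussian mixture fitted by EM, can be reproduced with standard tooling; the EMbC itself adds the binary-per-variable labelling and is distributed as the R package EMbC. Below is a generic sketch on synthetic speed/turn-like data, not the authors' implementation.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    slow = rng.normal([0.5, 1.0], 0.2, (200, 2))   # low speed, high turn (foraging-like)
    fast = rng.normal([3.0, 0.1], 0.3, (200, 2))   # high speed, low turn (travelling-like)
    X = np.vstack([slow, fast])

    gmm = GaussianMixture(n_components=2, random_state=0)
    labels = gmm.fit_predict(X)    # EM fit of the mixture, then hard labelling
    print(np.bincount(labels))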
A CAPACITY EXPANSION PROBLEM WITH BUDGET CONSTRAINT AND BOTTLENECK LIMITATION
Anonymous
2001-01-01
This paper considers a capacity expansion problem with a budget constraint. Suppose each edge in the network has two attributes: capacity and degree of difficulty. The difficulty degree of a tree T is the maximum degree of difficulty over all edges in the tree, and the cost for coping with the difficulty in a tree is a nondecreasing function of the difficulty degree of the tree. The authors need to increase the capacities of some edges so that there is a spanning tree whose capacity can be increased to the maximum extent, while the total cost for increasing capacity as well as overcoming the difficulty in the spanning tree does not exceed a given budget D*. Supposing the cost for increasing capacity on each edge is a linear function of the capacity increment, they transform this problem into solving a set of hybrid parametric spanning tree problems [1] and propose a strongly polynomial algorithm.
Christensen, Thomas Budde
The cluster theory attributed to Michael Porter has significantly influenced industrial policies in countries across Europe and North America since the beginning of the 1990s. Institutions such as the EU, OECD and the World Bank and governments in countries such as the UK, France, The Netherlands...
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani [MV80]. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp [HK73] also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) [GKK10]. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over [MV80]. We use a Markov chain similar to the hard-core model for Glauber dynamics with fugacity parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs distribution [V99], to design a faster algori...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been growing interest in using energy harvesting techniques for powering wireless sensor networks. The reason for utilizing this technology is the sensors' limited operation time, which results from the finite capacity of their batteries, and the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multisource energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
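A common MPPT scheme, perturb and observe, fits in a few lines: step the operating voltage, keep going while the measured power rises, reverse when it falls. The sketch below assumes hypothetical read_power/set_voltage hardware callbacks and a toy concave power-voltage curve; it illustrates the technique, not any specific platform reviewed in the article.

    def perturb_and_observe(read_power, set_voltage, v0, dv=0.05, steps=200):
        """Track the maximum power point by perturbing the voltage and
        observing whether the harvested power increases."""
        v, direction = v0, 1.0
        set_voltage(v)
        p_prev = read_power()
        for _ in range(steps):
            v += direction * dv
            set_voltage(v)
            p = read_power()
            if p < p_prev:          # passed the peak: reverse the perturbation
                direction = -direction
            p_prev = p
        return v

    # Toy panel model: power peaks at 17 V (hypothetical numbers).
    state = {"v": 0.0}
    set_v = lambda v: state.__setitem__("v", v)
    power = lambda: max(0.0, -0.5 * (state["v"] - 17.0) ** 2 + 60.0)
    print(round(perturb_and_observe(power, set_v, v0=12.0), 2))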
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters in which the maximum load supported by a column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure the static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotoni...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
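The background-only versus background-plus-source comparison can be illustrated with a toy Poisson log-likelihood ratio over an aperture of pixels. This sketch is not the Sherpa/MLE tool itself; the counts, background model and PSF template below are invented, and the real tool fits across stacked observations with per-observation PSFs.

from math import lgamma, log

def poisson_loglike(counts, model):
    # log L = sum_k [ -m_k + n_k log m_k - log(n_k!) ]
    return sum(-m + n * log(m) - lgamma(n + 1) for n, m in zip(counts, model))

def source_test(counts, bkg, psf, amplitudes):
    # Fit the amplitude s of a PSF-shaped source on top of the background and
    # return (best s, Delta logL) relative to the background-only hypothesis.
    ll0 = poisson_loglike(counts, bkg)
    ll1, s_best = max(
        (poisson_loglike(counts, [b + s * p for b, p in zip(bkg, psf)]), s)
        for s in amplitudes
    )
    return s_best, ll1 - ll0

counts = [3, 8, 15, 9, 4]                 # hypothetical aperture pixels
bkg = [2.0] * 5                           # flat background expectation
psf = [0.05, 0.2, 0.5, 0.2, 0.05]         # hypothetical PSF template
print(source_test(counts, bkg, psf, [0.5 * i for i in range(1, 80)]))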
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Quotients of cluster categories
Jorgensen, Peter
2007-01-01
Higher cluster categories were recently introduced as a generalization of cluster categories. This paper shows that in Dynkin types A and D, half of all higher cluster categories are actually just quotients of cluster categories. The other half can be obtained as quotients of 2-cluster categories, the "lowest" type of higher cluster categories. Hence, in Dynkin types A and D, all higher cluster phenomena are implicit in cluster categories and 2-cluster categories. In contrast, the same is not...
Small Business Administration — The Regional Innovation Clusters serve a diverse group of sectors and geographies. Three of the initial pilot clusters, termed Advanced Defense Technology clusters,...
STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS
无
2001-01-01
The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The limiting factors related to maximum specific sludge activity (diffusion, substrate sort, substrate concentration and granule size) were studied. The general principle and procedure for the precise measurement of maximum specific sludge activity were suggested. The potential loading-rate capacity of the IC and Biobed anaerobic reactors was analyzed and compared using the batch test results.
Landex, Alex
2011-01-01
Stations face other capacity challenges than open lines, as it is at stations that the traffic is dispatched. The UIC 406 capacity method that can be used to analyse capacity consumption can be expounded in different ways at stations, which may lead to different results. Therefore, stations need...... special focus when conducting UIC 406 capacity analyses. This paper describes how the UIC 406 capacity method can be expounded for stations. Commonly, for the analyses of stations it is recommended to include the entire station, including the switch zone(s) and all station tracks. By including the switch...... is changed, this paper recommends that railway lines are not always divided. In case trains turn around on an open (single track) line, the capacity consumption may be too low if a railway line is divided. The same can be the case if only few trains are overtaken at an overtaking station. For dead end...
Evaluation of railway capacity
Landex, Alex; Kaas, Anders H.; Schittenhelm, Bernd
2006-01-01
This paper describes the relatively new UIC 406 method for calculating capacity consumption on railway lines. The UIC 406 method is an easy and effective way of calculating the capacity consumption, but it is possible to expound the UIC 406 method in different ways which can lead to different...... capacity consumptions. This paper describes the UIC 406 method and how it is expounded in Denmark. The paper describes the importance of choosing the right length of the line sections examined and how line sections with multiple track sections are examined. Furthermore, the possibility of using idle...... capacity to run more trains is examined. The paper presents a method to examine the expected capacity utilization of future timetables. The method is based on the plan of operation instead of the exact (known) timetable. At the end of the paper it is described how it is possible to make capacity statements...
Poenaru, Dorin N.; Greiner, Walter
Cluster radioactivity, one of the rare examples of a phenomenon predicted before experimental discovery, offers the opportunity to introduce fission theory based on the asymmetric two-center shell model. The valleys within the potential energy surfaces are due to shell effects and clearly show why cluster radioactivity was mostly detected in parent nuclei leading to a doubly magic lead daughter. Saddle point shapes can be determined by solving an integro-differential equation. Nuclear dynamics allows us to calculate the half-lives. The following cluster decay modes (or heavy particle radioactivities) have been experimentally confirmed: 14C, 20O, 23F, 22,24-26Ne, 28,30Mg, 32,34Si, with half-lives in good agreement with values predicted within our analytical superasymmetric fission model. The preformation probability is calculated as the penetrability of the internal barrier. A universal curve is described and used as an alternative for the estimation of half-lives. The macroscopic-microscopic method was extended to investigate two-alpha accompanied fission and true ternary fission. The methods developed in nuclear physics are also adapted to study the stability of atomic clusters deposited on planar surfaces.
Adam McCarty
2001-01-01
This report is the outcome of a study commissioned to examine the capacity building needs in Vietnam, and is a supplementary document to the Asian Development Bank's Country Operational Strategy for Vietnam. Vietnam's needs in terms of capacity building are particularly important given that it is a transitional economy and also one with little institutional experience in dealing with the international donor community. This paper examines the international awareness of capacity building and ca...
Quantum Confinement and Negative Heat Capacity
Serra, Pablo; Carignano, Marcelo; Alharbi, Fahhad; Kais, Sabre
2013-01-01
Thermodynamics dictates that the specific heat of a system is strictly non-negative. However, in finite classical systems there are well known theoretical and experimental cases where this rule is violated, in particular finite atomic clusters. Here, we show for the first time that negative heat capacity can also occur in finite quantum systems. The physical scenario on which this effect might be experimentally observed is discussed. Observing such an effect might lead to the design of new li...
Load Balancing Algorithm for Cache Cluster
刘美华; 古志民; 曹元大
2003-01-01
Based on the cluster's definition of load, each request is treated as the unit of granularity for computing load and implementing load balancing in a cache cluster. First, the processing power of a cache node is studied from four aspects: network bandwidth, memory capacity, disk access rate and CPU usage. Then, a weighted load for each cache node is defined. On this basis, a load-balancing algorithm that can be applied to the cache cluster is proposed. Finally, Polygraph is used as a benchmarking tool to test a cache cluster running the load-balancing algorithm and a cache cluster using the cache array routing protocol, respectively. The results show that the load-balancing algorithm improves the performance of the cache cluster.
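A minimal sketch of the idea (the weights below are illustrative assumptions, not the paper's values): combine the four utilization metrics into one weighted load per node and dispatch each request to the least-loaded node.

def node_load(net, mem, disk, cpu, w=(0.3, 0.2, 0.2, 0.3)):
    # Weighted load from the four aspects named in the abstract.
    return w[0] * net + w[1] * mem + w[2] * disk + w[3] * cpu

def dispatch(nodes):
    # Send the next request to the cache node with the smallest load.
    return min(nodes, key=lambda n: node_load(**n["usage"]))

nodes = [
    {"name": "cache-a", "usage": dict(net=0.7, mem=0.5, disk=0.4, cpu=0.6)},
    {"name": "cache-b", "usage": dict(net=0.3, mem=0.6, disk=0.5, cpu=0.4)},
]
print(dispatch(nodes)["name"])   # -> cache-b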
Light dependence of carboxylation capacity for C3 photosynthesis models
Photosynthesis at high light is often modelled by assuming limitation by the maximum capacity of Rubisco carboxylation at low carbon dioxide concentrations, by electron transport capacity at higher concentrations, and sometimes by triose-phosphate utilization rate at the highest concentrations. Pho...
Comaskey, Brian J.; Scheibner, Karl F.; Ault, Earl R.
2007-05-01
The heat capacity laser concept is extended to systems in which the heat capacity lasing media is a liquid. The laser active liquid is circulated from a reservoir (where the bulk of the media and hence waste heat resides) through a channel so configured for both optical pumping of the media for gain and for light amplification from the resulting gain.
Habekost, Thomas; Starrfelt, Randi
2009-01-01
to patient testing, and review existing TVA-based patient studies organized by lesion anatomy. Lesions in three anatomical regions affect visual capacity: The parietal lobes, frontal cortex and basal ganglia, and extrastriate cortex. Visual capacity thus depends on large, bilaterally distributed anatomical...
Sameni, Melody Khadem; Preston, John M.
2012-01-01
Growth in rail traffic has not been matched by increases in railway infrastructure. Given this capacity challenge and the current restrictions on public spending, the allocation and the utilization of existing railway capacity are more important than ever. Great Britain has had the greatest growth...... in rail passenger kilometers among European countries since 1996. However, costs are higher and efficiency is lower than European best practice. This paper provides an innovative methodology for assessing the efficiency of passenger operators in capacity utilization. Data envelopment analysis (DEA) is used...... to analyze the efficiency of operators in transforming inputs of allocated capacity of infrastructure and franchise payments into valuable passenger service outputs while avoiding delays. By addressing operational and economic aspects of capacity utilization simultaneously, the paper deviates from existing...
ON THE BOTTLENECK CAPACITY EXPANSION PROBLEMS ON NETWORKS
Yang Chao; Zhang Jianzhong
2006-01-01
This article considers a class of bottleneck capacity expansion problems. Such problems aim to enhance bottleneck capacity to a certain level with minimum cost. Given a network G(V, A, C) consisting of a set of nodes V = {v1, v2, ..., vn}, a set of arcs A ⊆ {(vi, vj) | i = 1, 2, ..., n; j = 1, 2, ..., n} and a capacity vector C, the component Cij of C is the capacity of arc (vi, vj). Define the capacity of a subset A' of A as the minimum capacity of the arcs in A', and the capacity of a family F of subsets of A as the maximum capacity of its members. There are two types of expansion models. In the arc-expanding model, the unit cost to increase the capacity of arc (vi, vj) is wij. In the node-expanding model, it is assumed that the capacities of all arcs (vi, vj) which start at the same node vi should be increased by the same amount and that the unit cost of such an expansion is wi. This article considers three kinds of bottleneck capacity expansion problems (path, spanning arborescence and maximum flow) in both expansion models. For each kind of expansion problem, the article discusses the characteristics of the problem and presents several results on its complexity.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
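The Toeplitz/Levinson step the abstract refers to is the standard Levinson-Durbin recursion. The sketch below is a generic implementation for an autocorrelation sequence, written for this summary rather than taken from the paper; note the reflection coefficient stays below 1 in magnitude, which is the stability property the abstract mentions.

import numpy as np

def levinson_durbin(r, order):
    # Solve the Toeplitz normal equations for the prediction-error filter
    # a (with a[0] = 1) given autocorrelation r[0..order].
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for m in range(1, order + 1):
        # reflection coefficient; |k| < 1 keeps the recursion stable
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / err
        prev = a.copy()
        for i in range(1, m):
            a[i] = prev[i] + k * prev[m - i]
        a[m] = k
        err *= 1.0 - k * k
    return a, err

# Toy check on an AR(1) process with r[k] = 0.8**k:
a, err = levinson_durbin(0.8 ** np.arange(6), 5)
print(np.round(a, 3), round(err, 3))   # ~[1, -0.8, 0, 0, 0, 0], 0.36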
Capacity of Discrete Molecular Diffusion Channels
Einolghozati, Arash; Beirami, Ahmad; Fekri, Faramarz
2011-01-01
In diffusion-based molecular communications, messages can be conveyed via the variation in the concentration of molecules in the medium. In this paper, we intend to analyze the achievable capacity in transmission of information from one node to another in a diffusion channel. We observe that because of the molecular diffusion in the medium, the channel possesses memory. We then model the memory of the channel by a two-step Markov chain and obtain the equations describing the capacity of the diffusion channel. By performing a numerical analysis, we obtain the maximum achievable rate for different levels of the transmitter power, i.e., the molecule production rate.
Saeed, Faisal; Salim, Naomie; Abdo, Ammar
2013-07-01
Many consensus clustering methods have been applied in areas such as pattern recognition, machine learning, information theory and bioinformatics. However, few methods have been used for clustering chemical compounds. In this paper, an information-theoretic, voting-based algorithm (the Adaptive Cumulative Voting-based Aggregation Algorithm, A-CVAA) was examined for combining multiple clusterings of chemical structures. The effectiveness of the clusterings was evaluated based on the ability of the clustering method to separate active from inactive molecules in each cluster, and the results were compared with Ward's method. The MDL Drug Data Report (MDDR) chemical dataset and the Maximum Unbiased Validation (MUV) dataset were used. Experiments suggest that the adaptive cumulative voting-based consensus method can improve the effectiveness of combining multiple clusterings of chemical structures.
Evolution of Nuclear Star Clusters
Merritt, David
2008-01-01
Two-body relaxation times of nuclear star clusters are short enough that gravitational encounters should substantially affect their structure in 10 Gyr or less. In nuclear star clusters without massive black holes, dynamical evolution is a competition between core collapse, which causes densities to increase, and heat input from the surrounding galaxy, which causes densities to decrease. The maximum extent of a nucleus that can resist expansion is derived numerically for a wide range of initial conditions; observed nuclei are shown to be compact enough to resist expansion, although there may have been an earlier generation of low-density nuclei that were dissolved. An evolutionary model for NGC 205 is presented which suggests that the nucleus of this galaxy has already undergone core collapse. Adding a massive black hole to a nucleus inhibits core collapse, and nuclear star clusters with black holes always expand, due primarily to heat input from the galaxy. The expansion rate is smaller for larger black hole...
An Automatic Clustering Technique for Optimal Clusters
Pavan, K Karteeka; Rao, A V Dattatreya; 10.5121/ijcsea.2011.1412
2011-01-01
This paper proposes a simple, automatic and efficient clustering algorithm, namely, Automatic Merging for Optimal Clusters (AMOC) which aims to generate nearly optimal clusters for the given datasets automatically. The AMOC is an extension to standard k-means with a two phase iterative procedure combining certain validation techniques in order to find optimal clusters with automation of merging of clusters. Experiments on both synthetic and real data have proved that the proposed algorithm finds nearly optimal clustering structures in terms of number of clusters, compactness and separation.
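A rough sketch of the two-phase idea (over-cluster with k-means, then merge while a validity index improves); AMOC's specific validation techniques are simplified here to the silhouette score, so this is an approximation of the scheme, not the authors' code.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def amoc_like(X, k_max=10, seed=0):
    labels = KMeans(n_clusters=k_max, n_init=10, random_state=seed).fit_predict(X)
    best = silhouette_score(X, labels)
    while len(set(labels)) > 2:
        ids = sorted(set(labels))
        cents = np.array([X[labels == c].mean(axis=0) for c in ids])
        d = np.linalg.norm(cents[:, None] - cents[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        merged = np.where(labels == ids[j], ids[i], labels)  # merge closest pair
        score = silhouette_score(X, merged)
        if score <= best:
            break            # merging no longer improves cluster validity
        labels, best = merged, score
    return labels

# usage: labels = amoc_like(np.random.rand(200, 2), k_max=10)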
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum of the power equation using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
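The differentiation step can be reproduced numerically: with an assumed single-diode I-V model (parameter values invented for illustration), the voltage of maximum power solves dP/dV = 0.

from math import exp
from scipy.optimize import brentq

def current(v, i_l=3.0, i_0=1e-9, v_t=1.3):
    # Single-diode PV model; i_l is the light current, i_0 the saturation current.
    return i_l - i_0 * (exp(v / v_t) - 1.0)

def dp_dv(v, h=1e-6):
    # Central-difference derivative of P(v) = v * I(v).
    return ((v + h) * current(v + h) - (v - h) * current(v - h)) / (2 * h)

v_mp = brentq(dp_dv, 0.1, 30.0)        # root of dP/dV = 0
print(v_mp, v_mp * current(v_mp))      # voltage and power at the maximum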
Maximum likelihood method and Fisher's information in physics and econophysics
Syska, Jacek
2012-01-01
Three steps in the development of the maximum likelihood (ML) method are presented. At first, the application of the ML method and Fisher information notion in the model selection analysis is described (Chapter 1). The fundamentals of differential geometry in the construction of the statistical space are introduced, illustrated also by examples of the estimation of the exponential models. At second, the notions of the relative entropy and the information channel capacity are introduced (Chapter 2). The observed and expected structural information principle (IP) and the variational IP of the modified extremal physical information (EPI) method of Frieden and Soffer are presented and discussed (Chapter 3). The derivation of the structural IP based on the analyticity of the logarithm of the likelihood function and on the metricity of the statistical space of the system is given. At third, the use of the EPI method is developed (Chapters 4-5). The information channel capacity is used for the field theory models cl...
Pullout capacity of batter pile in sand.
Nazir, Ashraf; Nasr, Ahmed
2013-03-01
Many offshore structures are subjected to overturning moments due to wind load, wave pressure, and ship impacts. Most retaining walls are also subjected to horizontal forces and bending moments due to earth pressure. For foundations in such structures, a combination of vertical and batter piles is usually used. Little information is available in the literature about estimating the capacity of piles under uplift, and in cases where these supporting piles are not vertical, the behavior under axial pullout is not well established. In order to delineate the significant variables affecting the ultimate uplift shaft resistance of batter piles in dry sand, a testing program comprising 62 pullout tests was conducted. The tests were conducted on model steel piles installed in loose, medium, and dense sand at embedded depth ratios, L/d, varying from 7.5 to 30 and with batter angles of 0°, 10°, 20°, and 30°. Results indicate that the pullout capacity of a batter pile constructed in dense and/or medium density sand increases with increasing batter angle, attains a maximum value, and then decreases; the maximum value of Pα occurs at a batter angle of approximately 20° and is about 21-31% more than the vertical pile capacity, while the pullout capacity of a batter pile constructed in loose sand decreases with increasing pile inclination. The results also indicate that a circular pile is more resistant to pullout forces than square and rectangular pile shapes. The rough model piles tested experienced an 18-75% increase in capacity compared with the smooth model piles. The suggested relations for the pullout capacity of a batter pile in terms of the vertical pile capacity give good predictions.
Vedr.: Military capacity building
Larsen, Josefine Kühnel; Struwe, Lars Bangert
2013-01-01
Military capacity building has increasingly become an integral part of Danish defence. Military capacity building is a new way of thinking about Danish defence and poses a new set of challenges and opportunities for the Danish military and the political leadership. On the 12th of December, PhD candidate Josefine Kühnel Larsen and researcher Lars Bangert Struwe of CMS organized a seminar in collaboration with the Royal Danish Defence College and the East African Security Governance Network. The seminar focused on some of the risks involved in military capacity building and how these risks are dealt with from...
Power Evaluation of Focused Cluster Tests.
Puett, RC; Lawson, AB; Clark, AB; Hebert, JR; Kulldorff, M
2010-09-01
Many statistical tests have been developed to assess the significance of clusters of disease located around known sources of environmental contaminants, also known as focused disease clusters. The majority of focused-cluster tests were designed to detect a particular spatial pattern of clustering, one in which the disease cluster centers around the pollution source and declines in a radial fashion with distance. However, other spatial patterns of environmentally related disease clusters are likely given that the spatial dispersion patterns of environmental contaminants, and thus human exposure, depend on a number of factors (i.e., meteorology and topography). For this study, data were simulated with five different spatial patterns of disease clusters, reflecting potential pollutant dispersion scenarios: 1) a radial effect decreasing with increasing distance, 2) a radial effect with a defined peak and decreasing with distance, 3) a simple angular effect, 4) an angular effect decreasing with increasing distance and 5) an angular effect with a defined peak and decreasing with distance. The power to detect each type of spatially distributed disease cluster was evaluated using Stone's Maximum Likelihood Ratio Test, Tango's Focused Test, Bithell's Linear Risk Score Test, and variations of the Lawson-Waller Score Test. Study findings underscore the importance of considering environmental contaminant dispersion patterns, particularly directional effects, with respect to focused-cluster test selection in cluster investigations. The effect of extra variation in risk also is considered, although its effect is not substantial in terms of the power of tests.
Walker Damian
2007-07-01
Background: Chile is currently undergoing a period of rapid demographic transition which has led to an increase in the proportion of older people in the population; the proportion aged 60 years and over, for example, increased from 8% of the population in 1980 to 12% in 2005. In an effort to promote healthy ageing and preserve function, the government of Chile has formulated a package of actions into the Programme of Complementary Feeding for the Older Population (PACAM), which has been providing a nutritional supplement to older people since 1998. PACAM distributes micronutrient-fortified foods to individuals aged 70 years and over registered at Primary Health Centres and enrolled in the programme. The recommended serving size (50 g/day) of these supplements provides 50% of daily micronutrient requirements and 20% of daily energy requirements of older people. No information is currently available on the cost-effectiveness of the supplementation programme. Aim: The aim of the CENEX cluster randomised controlled trial is to evaluate the cost-effectiveness of an ongoing nutrition supplementation programme, and of a specially designed physical exercise intervention, for older people of low to medium socio-economic status living in Santiago, Chile. Methods: The study has been conceptualised as a public health programme effectiveness study and has been designed as a 24-month factorial cluster-randomised controlled trial conducted among 2800 individuals aged 65.0–67.9 years at baseline attending 28 health centres in Santiago. The main outcomes are incidence of pneumonia, walking capacity and change in body mass index over 24 months of intervention. Costing data (user and provider), collected at all levels, will enable the determination of the cost-effectiveness of the two interventions individually and in combination. The study is supported by the Ministry of Health in Chile, which is keen to expand and improve its national programme of nutrition for
Triadic conceptual structure of the maximum entropy approach to evolution.
Herrmann-Pillath, Carsten; Salthe, Stanley N
2011-03-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar opposition of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information-generating role of evolution as a physical process can be analyzed, and we propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information-carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.
Mielke, U.
1979-01-01
We measured the pulmonary CO diffusion capacity in 287 persons with the steady-state and single-breath methods, applying apnoeic periods of 4 and 10 seconds duration. The methodical significance, polyclinical applicability and diagnostic relevance of the measurements with respect to other approved pulmonary function tests are discussed. The differing pulmonary diffusion capacity values found in normal persons and in patients suffering from silicosis, pulmonary fibrosis, Boeck's disease or rheumatoid arthritis were investigated and critically evaluated.
Revisiting Absorptive Capacity
de Araújo, Ana Luiza Lara; Ulhøi, John Parm; Lettl, Christopher
Absorptive capacity has mostly been perceived as a 'passive' outcome of R&D investments. Recently, however, a growing interest into its 'proactive' potentials has emerged. This paper taps into this development and proposes a dynamic model for conceptualizing the determinants of the complementary...... learning processes of absorptive capacity, which comprise combinative and adaptive capabilities. Drawing on survey data (n=169), the study concludes that combinative capabilities primarily enhance transformative and exploratory learning processes, while adaptive capabilities strengthen all three learning...
Detecting Clusters in Atom Probe Data with Gaussian Mixture Models.
Zelenty, Jennifer; Dahl, Andrew; Hyde, Jonathan; Smith, George D W; Moody, Michael P
2017-04-01
Accurately identifying and extracting clusters from atom probe tomography (APT) reconstructions is extremely challenging, yet critical to many applications. Currently, the most prevalent approach to detect clusters is the maximum separation method, a heuristic that relies heavily upon parameters manually chosen by the user. In this work, a new clustering algorithm, Gaussian mixture model Expectation Maximization Algorithm (GEMA), was developed. GEMA utilizes a Gaussian mixture model to probabilistically distinguish clusters from random fluctuations in the matrix. This machine learning approach maximizes the data likelihood via expectation maximization: given atomic positions, the algorithm learns the position, size, and width of each cluster. A key advantage of GEMA is that atoms are probabilistically assigned to clusters, thus reflecting scientifically meaningful uncertainty regarding atoms located near precipitate/matrix interfaces. GEMA outperforms the maximum separation method in cluster detection accuracy when applied to several realistically simulated data sets. Lastly, GEMA was successfully applied to real APT data.
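A minimal sketch of the core idea, using scikit-learn's GaussianMixture on synthetic 3-D "atom" positions (all data and the component count below are invented; GEMA additionally models the random matrix and learns the number and shape of the clusters):

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
solute_a = rng.normal([0.0, 0.0, 0.0], 0.5, size=(200, 3))   # precipitate 1
solute_b = rng.normal([4.0, 4.0, 4.0], 0.5, size=(200, 3))   # precipitate 2
matrix = rng.uniform(-2.0, 6.0, size=(100, 3))               # diffuse background
positions = np.vstack([solute_a, solute_b, matrix])

# One broad component stands in for the matrix; the others fit the clusters.
gmm = GaussianMixture(n_components=3, random_state=0).fit(positions)
probs = gmm.predict_proba(positions)   # soft (probabilistic) assignments
print(probs[0].round(3))               # membership uncertainty is explicit per atom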
Tidally Induced Elongation and Alignments of Galaxy Clusters
Salvador-Solé, E; Salvador-Sole, Eduard; Solanes, Jose M.
1993-01-01
We show that tidal interaction among galaxy clusters can account for their observed alignments and very marked elongation and, consequently, that these characteristics of clusters are actually consistent with their being formed in hierarchical clustering. The well-established distribution of projected axial ratios of clusters with richness class $R\ge 0$ is recovered very satisfactorily by means of a simple model with no free parameters. The main perturbers are relatively rich ($R\ge 1$) single clusters and/or groups of clusters (superclusters) of a wider richness class ($R\ge 0$) located within a distance of about 65 $h^{-1}$ Mpc from the perturbed cluster. This makes the proposed scheme also consistent with all reported alignment effects involving clusters. We find that this tidal interaction is typically in the saturated regime (i.e., the maximum elongation allowed for systems in equilibrium is reached), which explains the very similar intrinsic axial ratio shown by all clusters. Tides would therefore play ...
Heavy hitters via cluster-preserving clustering
Larsen, Kasper Green; Nelson, Jelani; Nguyen, Huy L.
2016-01-01
... providing correctness whp. In fact, a simpler version of our algorithm for p = 1 in the strict turnstile model answers queries even faster than the "dyadic trick" by roughly a log n factor, dominating it in all regards. Our main innovation is an efficient reduction from the heavy hitters problem to a clustering problem in which each heavy hitter is encoded as some form of noisy spectral cluster in a much bigger graph, and the goal is to identify every cluster. Since every heavy hitter must be found, correctness requires that every cluster be found. We thus need a "cluster-preserving clustering" algorithm that partitions the graph into clusters with the promise of not destroying any original cluster. To do this we first apply standard spectral graph partitioning, and then we use some novel combinatorial techniques to modify the cuts obtained so as to make sure that the original clusters are sufficiently preserved...
Alkaline solution neutralization capacity of soil.
Asakura, Hiroshi; Sakanakura, Hirofumi; Matsuto, Toshihiko
2010-10-01
Alkaline eluate from municipal solid waste (MSW) incineration residue deposited in landfill alkalizes the waste and soil layers. From the viewpoint of accelerating stabilization and preventing heavy metal elution, the pH of the landfill layer (waste and daily cover soil) should be controlled. On the other hand, the pH of leachate from existing MSW landfill sites is usually approximately neutral. One of the reasons is that daily cover soil can neutralize alkaline solution containing Ca2+ as the cation. However, in a landfill layer where various types of wastes and reactions should be taken into consideration, the ability of soil to neutralize alkaline solutions other than Ca(OH)2 should be evaluated. In this study, the neutralization capacities of various types of soils were measured using Ca(OH)2 and NaOH solutions. Each soil used in this study showed approximately the same capacity to neutralize both the Ca(OH)2 and the NaOH alkaline solutions. The cation exchange capacity was less than 30% of the maximum alkali neutralization capacity obtained by the titration test. The mechanism of neutralization by the pH-dependent charge can explain the equal neutralization capacities of the soils. Although further investigation of the neutralization capacity of the soils for alkaline substances other than NaOH is required, daily cover soil could serve as a buffer zone for alkaline leachates containing Ca(OH)2 or other alkaline substances.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance and temperature is examined. - Abstract: A superconducting fault current limiter (SFCL) can reduce short circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
MEASUREMENT OF SPECIFIC HEAT CAPACITY OF SALTSTONE
Harbour, J; Vickie Williams, V
2008-09-29
One of the goals of the Saltstone variability study is to identify (and quantify the impact of) the operational and compositional variables that control or influence the important processing and performance properties of Saltstone grout mixtures. The heat capacity of the Saltstone waste form is one of the important properties of Saltstone mixes that was last measured at SRNL in 1997. It is therefore important to develop a core competency for rapid and accurate analysis of the specific heat capacity of the Saltstone mixes in order to quantify the impact of compositional and operational variations on this property as part of the variability study. The heat capacity, coupled with the heat of hydration data obtained from isothermal calorimetry for a given Saltstone mix, can be used to predict the maximum temperature increase in the cells within the vaults of the Saltstone Disposal Facility (SDF). The temperature increase controls the processing rate and the pour schedule. The maximum temperature is also important to the performance properties of the Saltstone. For example, in mass pours of concrete or grout of which Saltstone is an example, the maximum temperature increase and the maximum temperature difference (between the surface and the hottest location) are controlled to ensure durability of the product and prevent or limit the cracking caused by the thermal gradients produced during curing. This report details the development and implementation of a method for the measurement of the heat capacities of Saltstone mixes as well as the heat capacities of the cementitious materials of the premix and the simulated salt solutions used to batch the mixes. The developed method utilizes the TAM Air isothermal calorimeter and takes advantage of the sophisticated heat flow measurement capabilities of the instrument. Standards and reference materials were identified and used to validate the procedure and ensure accuracy of testing. Heat capacities of Saltstone mixes were
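The role of heat capacity in that prediction is simple to state: for a near-adiabatic mass pour, the temperature rise is roughly the heat of hydration divided by the mix's specific heat capacity. A worked example with assumed round numbers (not SRNL data):

q_hydration = 120.0   # J released per gram of grout during curing (assumed)
cp_mix = 1.8          # specific heat capacity of the mix, J/(g*K) (assumed)

delta_t = q_hydration / cp_mix   # adiabatic temperature rise
print(f"temperature rise ~ {delta_t:.0f} K")   # ~67 K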
Healthy adults maximum oxygen uptake prediction from a six minute walking test
Nury Nusdwinuringtyas
2011-08-01
Background: A parameter is needed in medical activities or services to determine functional capacity. This study aims to produce a functional capacity parameter, maximum VO2, for Indonesian adults. Methods: This study used 123 healthy Indonesian adult subjects (58 males and 65 females) with a sedentary lifestyle, using a cross-sectional method. Results: Designed using the following variables: distance, body height, body weight, sex, age, maximum heart rate in the six minute walking test and lung capacity (FEV and FVC), the study revealed a good correlation (except for body weight) with maximum VO2. Three new formulas were proposed, consisting of eight, six, and five variables respectively. Tests of the new formulas gave maximum VO2 values relevant to the gold standard maximum VO2 measured using the Cosmed® C-Pex. Conclusion: The Nury formula is an appropriate predictor of maximum oxygen uptake for healthy Indonesian adults as it was designed using Indonesian (Mongoloid) subjects, in contrast to Cahalin's formula (Caucasian). The Nury formula consisting of five variables is the most applicable because it requires neither measurement tools nor specific competency. (Med J Indones 2011;20:195-200) Keywords: maximum VO2, Nury's formula, six minute walking test
Ducros Anne
2008-07-01
Cluster headache (CH) is a primary headache disease characterized by recurrent short-lasting attacks (15 to 180 minutes) of excruciating unilateral periorbital pain accompanied by ipsilateral autonomic signs (lacrimation, nasal congestion, ptosis, miosis, lid edema, redness of the eye). It affects young adults, predominantly males. Prevalence is estimated at 0.5–1.0/1,000. CH has a circannual and circadian periodicity, attacks being clustered (hence the name) in bouts that can occur during specific months of the year. Alcohol is the only dietary trigger of CH; strong odors (mainly solvents and cigarette smoke) and napping may also trigger CH attacks. During bouts, attacks may happen at precise hours, especially during the night. During the attacks, patients tend to be restless. CH may be episodic or chronic, depending on the presence of remission periods. CH is associated with trigeminovascular activation and neuroendocrine and vegetative disturbances; however, the precise causative mechanisms remain unknown. Involvement of the hypothalamus (a structure regulating endocrine function and sleep-wake rhythms) has been confirmed, explaining, at least in part, the cyclic aspects of CH. The disease is familial in about 10% of cases. Genetic factors play a role in CH susceptibility, and a causative role has been suggested for the hypocretin receptor gene. Diagnosis is clinical. Differential diagnoses include other primary headache diseases such as migraine, paroxysmal hemicrania and SUNCT syndrome. At present, there is no curative treatment. There are efficient treatments to shorten the painful attacks (acute treatments) and to reduce the number of daily attacks (prophylactic treatments). Acute treatment is based on subcutaneous administration of sumatriptan and high-flow oxygen. Verapamil, lithium, methysergide, prednisone, greater occipital nerve blocks and topiramate may be used for prophylaxis. In refractory cases, deep-brain stimulation of the
Estimation of Maximum Allowable PV Connection to LV Residential Power Networks
Demirok, Erhan; Sera, Dezso; Teodorescu, Remus
2011-01-01
Maximum photovoltaic (PV) hosting capacity of low voltage (LV) power networks is mainly restricted by either the thermal limits of network components or the grid voltage quality resulting from high penetration of distributed PV systems. This maximum hosting capacity may be lower than the available solar...... transformer or using solar inverters with new grid support features. This study presents a methodology for the estimation of maximum PV hosting capacity including an IEC 60076-7 based thermal model of the distribution transformer. A certain part of a real distribution network in the Braedstrup suburban area in Denmark is used in simulation as a case study model. Furthermore, various solutions (utilizing thermally upgraded insulation paper in transformers, reactive power services from solar inverters, etc.) are implemented on the network under investigation to examine the PV penetration level, and finally key results learnt......
X-ray Spectroscopy of Cooling Cluster
Peterson, J.R.; /SLAC; Fabian, A.C.; /Cambridge U., Inst. of Astron.
2006-01-17
We review the X-ray spectra of the cores of clusters of galaxies. Recent high resolution X-ray spectroscopic observations have demonstrated a severe deficit of emission at the lowest X-ray temperatures as compared to that expected from simple radiative cooling models. The same observations have provided compelling evidence that the gas in the cores is cooling below half the maximum temperature. We review these results, discuss physical models of cooling clusters, and describe the X-ray instrumentation and analysis techniques used to make these observations. We discuss several viable mechanisms designed to cancel or distort the expected process of X-ray cluster cooling.
Identify Implicit Communities by Graph Clustering
YANG Nan; MENG Xiaofeng
2006-01-01
How to find these communities is an important research problem. Recent community discovery approaches mainly fall into HITS-style algorithms, bipartite-core algorithms and the maximum flow/minimum cut framework. In this paper, we propose a new method to extract communities. The MCL algorithm, short for the Markov Cluster Algorithm, a fast and scalable unsupervised clustering algorithm, is used to extract communities. By putting the mirror-deleting procedure after graph clustering, we decrease the comparison cost considerably. After MCL and mirror deletion, we use a community member selection algorithm to produce the sets of community candidates. The experiments and results show that the new method works effectively and properly.
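For reference, the MCL algorithm the abstract builds on alternates expansion (a matrix power of the column-stochastic adjacency matrix) and inflation (an elementwise power followed by renormalisation). A compact generic sketch of standard MCL, not the paper's full pipeline:

import numpy as np

def mcl(adj, expand=2, inflate=2.0, iters=40):
    m = adj + np.eye(len(adj))             # add self-loops
    m /= m.sum(axis=0, keepdims=True)      # make columns stochastic
    for _ in range(iters):
        m = np.linalg.matrix_power(m, expand)   # expansion: random-walk mixing
        m = m ** inflate                        # inflation: sharpen strong flows
        m /= m.sum(axis=0, keepdims=True)
    # each surviving row's support is one cluster (attractor plus its basin)
    return {tuple(np.nonzero(row > 1e-6)[0]) for row in m if row.max() > 1e-6}

# Two triangles joined by a single bridge split into two communities:
a = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    a[u, v] = a[v, u] = 1.0
print(mcl(a))   # e.g. {(0, 1, 2), (3, 4, 5)}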
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Clustering and Community Detection with Imbalanced Clusters
Aksoylar, Cem; Qian, Jing; Saligrama, Venkatesh
2016-01-01
Spectral clustering methods which are frequently used in clustering and community detection applications are sensitive to the specific graph constructions particularly when imbalanced clusters are present. We show that ratio cut (RCut) or normalized cut (NCut) objectives are not tailored to imbalanced cluster sizes since they tend to emphasize cut sizes over cut values. We propose a graph partitioning problem that seeks minimum cut partitions under minimum size constraints on partitions to de...
Geothermal Plant Capacity Factors
Greg Mines; Jay Nathwani; Christopher Richard; Hillary Hanson; Rachel Wood
2015-01-01
The capacity factors recently provided by the Energy Information Administration (EIA) indicated that this plant performance metric has declined for geothermal power plants since 2008. Though capacity factor is a term commonly used by geothermal stakeholders to express the ability of a plant to produce power, it is frequently misunderstood and in some instances incorrectly used. In this paper we discuss how capacity factor is defined and utilized by the EIA, including discussion of the information that the EIA requests from operators in their 923 and 860 forms, which are submitted monthly and annually by geothermal operators. A discussion is also provided regarding the entities utilizing the information in the EIA reports, and how those entities can misinterpret the data being supplied by the operators. The intent of the paper is to inform facility operators of the importance of the accuracy of the data they provide, and the implications of not providing the correct information.
Capacity Maximizing Constellations
Barsoum, Maged; Jones, Christopher
2010-01-01
Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations, as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
Jensen, Thomas Christian
2014-01-01
The paper presents estimations of the effect of bad weather on the observed speed on a Danish highway section, Køge Bugt Motorvejen. The paper concludes that weather, primarily precipitation and snow, has a clear negative effect on speed when the road is not in hypercongestion mode. Furthermore......, the capacity of the highway seems to be reduced in bad weather and there are indications that travel time variability is also increased, at least in free-flow conditions. Heavy precipitation reduces speed and capacity by around 5-8%, whereas snow primarily reduces capacity. Other weather variables......-parametrically against traffic density and in step 2 the residuals from step 1 are regressed linearly against the weather variables. The choice of a non-parametric method is made to avoid constricting ties from a parametric specification and because the focus here is not on the relationship between traffic flow...
Ryan, R E; Ryan, R E
1989-12-01
The patient with cluster headaches will be afflicted with the most severe type of pain that one will encounter. If the physician can do something to help this patient either by symptomatic or, more importantly, prophylactic treatment, he or she will have a most thankful patient. This type of headache is seen most frequently in men, and occurs in a cyclic manner. During an acute cycle, the patient will experience a daily type of pain that may occur many times per day. The pain is usually unilateral and may be accompanied by unilateral lacrimation, conjunctivitis, and clear rhinorrhea. Prednisone is the first treatment we employ. Patients are seen for follow-up approximately twice a week, and their medication is lowered in an appropriate manner, depending on their response to the treatment. Regulation of dosage has to be individualized, and when one reaches the lower dose such as 5 to 10 mg per day, the drug may have to be tapered more slowly, or even maintained at that level for a period of time to prevent further recurrence of symptoms. We frequently will use an intravenous histamine desensitization technique to prevent further attacks. We will give the patient an ergotamine preparation to use for symptomatic relief. As these patients often have headaches during the middle of the night, we will place the patient on a 2-mg ergotamine preparation to take prior to going to bed in the evening. This often works in a prophylactic nature, and prevents the nighttime occurrence of a headache. We believe that following these principles to make the accurate diagnosis and institute the proper therapy will help the practicing otolaryngologist recognize and treat patients suffering from this severe pain.
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of action of the chemicals, such as interference with the endocrine system. Achieving these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action not only from acute lethality per se but also from the severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit specific adverse effects, but such doses also present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have therefore developed the concept of a maximum tolerated dose (MTD), beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of an MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Maximum flow-based resilience analysis: From component to system
Jin, Chong; Li, Ruiying; Kang, Rui
2017-01-01
Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable loss, both economic and societal. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel's resilience measure. The two analytic models can be used to evaluate quantitatively and to compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with the number of components, while the resilience of the series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without. However, not all redundant capacity of components improves system resilience; the effectiveness of capacity redundancy depends on where the redundant capacity is located. PMID:28545135
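The resilience measure used above can be made concrete with a short sketch. The triangular form R = 1 - X·T/(2·T*) below is the form commonly attributed to Zobel, with initial loss X, recovery time T, and maximum acceptable recovery time T*; the paper's series/parallel derivations and Monte Carlo models are not reproduced, so treat this as a minimal illustration under those assumptions.

```python
# Minimal sketch of a Zobel-style resilience measure, assuming the common
# triangular loss-recovery form R = 1 - X*T / (2*T_star).

def zobel_resilience(loss_fraction: float, recovery_time: float, t_star: float) -> float:
    """Resilience in [0, 1] for an initial loss X recovered linearly over T,
    normalized by a maximum acceptable recovery time T*."""
    assert 0.0 <= loss_fraction <= 1.0 and 0.0 < recovery_time <= t_star
    return 1.0 - loss_fraction * recovery_time / (2.0 * t_star)

# Toy comparison (T* = 30 days): a 40% loss recovered in 10 days scores the
# same as an 80% loss recovered in 5 days, illustrating the loss/time tradeoff.
print(zobel_resilience(0.4, 10, 30))
print(zobel_resilience(0.8, 5, 30))
```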
Testamentary capacity and delirium.
Liptzin, Benjamin; Peisah, Carmelle; Shulman, Kenneth; Finkel, Sanford
2010-09-01
With the aging of the population there will be a substantial transfer of wealth in the next 25 years. The presence of delirium can complicate the evaluation of an older person's testamentary capacity and susceptibility to undue influence, but this has not been well examined in the existing literature. A subcommittee of the IPA Task Force on Testamentary Capacity and Undue Influence undertook to review how testamentary capacity and susceptibility to undue influence can be assessed, both prospectively and retrospectively, in patients with delirium. The subcommittee identified questions that should be asked in cases where someone changes their will or estate plan towards the end of their life in the presence of delirium. These questions include: was there consistency in the patient's wishes over time? Were these wishes expressed during a "lucid interval" when the person was less confused? Were the patient's wishes clearly expressed in response to open-ended questions? Is there clear documentation of the patient's mental status at the time of the discussion? This review, with some case examples, provides guidance on how to consider the question of testamentary capacity or susceptibility to undue influence in someone undergoing an episode of delirium.
Flood Bypass Capacity Optimization
Siclari, A.; Hui, R.; Lund, J. R.
2015-12-01
Large river flows can damage adjacent flood-prone areas by exceeding river channel and levee capacities. Particularly large floods are difficult to contain with leveed river banks alone. Flood bypasses can often reduce flood risk efficiently: excess river flow is diverted over a weir into a bypass, where it incurs much less damage and cost. Additional benefits of bypasses include ecosystem protection, agriculture, groundwater recharge, and recreation. Constructing or expanding a bypass incurs costs in land purchase, easements, and levee setbacks. Accounting for such benefits and costs, this study develops a simple mathematical model for optimizing flood bypass capacity using benefit-cost and risk analysis. Application to the Yolo Bypass, an existing bypass along the Sacramento River in California, estimates the optimal capacity that economically reduces flood damage and increases various benefits, especially for agriculture. Land availability is likely to limit bypass expansion; compensation for landowners could relax such limitations. Other economic values could affect the optimal results, as shown by sensitivity analysis on major parameters. By including land geography in the model, locations of promising capacity expansions can be identified.
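The benefit-cost formulation can be sketched in a few lines: choose a capacity c that minimizes expected residual damage plus expansion cost. Both functions below are hypothetical placeholders, not the paper's calibrated Yolo Bypass model.

```python
# Hedged sketch of capacity optimization by benefit-cost analysis; the damage
# and cost functions are invented stand-ins with plausible shapes.
from scipy.optimize import minimize_scalar

def expected_flood_damage(c):
    # Hypothetical: residual damage falls off as bypass capacity grows.
    return 5e8 / (1.0 + c / 1000.0)

def expansion_cost(c):
    # Hypothetical land purchase / easement / levee-setback cost, linear in capacity.
    return 2e5 * c

res = minimize_scalar(lambda c: expected_flood_damage(c) + expansion_cost(c),
                      bounds=(0.0, 10000.0), method="bounded")
print(f"optimal capacity ≈ {res.x:.0f} (arbitrary flow units)")
```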
Tortora, Cristina; Summa, Mireille Gettler
2011-01-01
Factorial clustering methods have been developed in recent years thanks to improvements in computational power. These methods perform a linear transformation of the data and a clustering on the transformed data, optimizing a common criterion. Factorial PD-clustering is based on Probabilistic Distance clustering (PD-clustering), an iterative, distribution-free, probabilistic clustering method. Factorial PD-clustering makes a linear transformation of the original variables into a reduced number of orthogonal ones, using a criterion common with PD-clustering. It is demonstrated that the Tucker 3 decomposition yields this transformation. Factorial PD-clustering alternates a Tucker 3 decomposition and a PD-clustering on the transformed data until convergence. This method can significantly improve the algorithm's performance; it allows one to work with large datasets and improves the stability and robustness of the method.
Possibilistic Exponential Fuzzy Clustering
Kiatichai Treerattanapitak; Chuleerat Jaruskulchai
2013-01-01
Generally, abnormal points (noise and outliers) cause cluster analysis to produce low accuracy, especially in fuzzy clustering. These data not only stay in clusters but also deviate the centroids from their true positions. Traditional fuzzy clustering like Fuzzy C-Means (FCM) always assigns data to all clusters, which is not reasonable in some circumstances. By reformulating the objective function in exponential form, the algorithm aggressively selects data into the clusters. However, noisy data and outliers cannot be properly handled by the clustering process; they are forced to be included in a cluster because of the general probabilistic constraint that the sum of the membership degrees across all clusters is one. In order to improve this weakness, the possibilistic approach relaxes this condition to improve membership assignment. Nevertheless, possibilistic clustering algorithms generally suffer from coincident clusters because their membership equations ignore the distance to other clusters. Although there are some possibilistic clustering approaches that do not generate coincident clusters, most of them require the right combination of multiple parameters for the algorithms to work. In this paper, we theoretically study Possibilistic Exponential Fuzzy Clustering (PXFCM), which integrates the possibilistic approach with exponential fuzzy clustering. PXFCM has only one parameter, and it not only partitions the data but also filters noisy data or detects them as outliers. Comprehensive experiments show that PXFCM produces high accuracy in both clustering results and outlier detection without generating coincident clusters.
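As background for the exponential and possibilistic variants discussed above, a compact standard Fuzzy C-Means loop is sketched below; PXFCM modifies the objective (exponential form) and relaxes the sum-to-one membership constraint, and those modifications are not reproduced here.

```python
# Baseline FCM sketch (the probabilistic, sum-to-one variant that PXFCM builds on).
import numpy as np

def fcm(X, k, m=2.0, iters=100, seed=0):
    """Standard Fuzzy C-Means: returns memberships U (n x k) and centroids C (k x d)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=len(X))       # fuzzy memberships, rows sum to one
    for _ in range(iters):
        W = U ** m
        C = W.T @ X / W.sum(axis=0)[:, None]         # fuzzy-weighted centroids
        D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        P = D ** (-2.0 / (m - 1.0))
        U = P / P.sum(axis=1, keepdims=True)         # probabilistic membership update
    return U, C

# Two well-separated blobs as a toy check.
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
               np.random.default_rng(2).normal(3, 0.3, (50, 2))])
U, C = fcm(X, k=2)
print(C.round(2))
```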
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation systems, as well as to the determination of several parameters of interest in quantum optics.
Wind farms model aggregation using probabilistic clustering
Fernandes, Paula Odete; Ferreira, Ángela Paula
2013-10-01
The main objective of this research is the identification of homogeneous groups within the wind farms of a major operator in the energy sector in Portugal, based on two multivariate analyses, Hierarchical Cluster Analysis and Discriminant Analysis, using two independent variables: annual liquid hours and net production. From the produced outputs, three homogeneous groups of wind farms were identified: (1) medium Installed Capacity and Induction Generator based Technology, (2) high Installed Capacity and Synchronous Generator based Technology, and (3) medium Installed Capacity and Synchronous Generator based Technology, which includes the wind farms with the highest annual liquid hours. It was found that the results obtained by cluster analysis are well classified, with a total percentage of correct classification of 97.1%, which can be considered excellent.
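A minimal sketch of the two-variable hierarchical clustering workflow described above, using SciPy; the farm values are invented stand-ins for annual liquid hours and net production, and Ward linkage after standardization is an assumption made for illustration.

```python
# Hierarchical clustering of wind farms on two standardized variables.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

farms = np.array([[2300, 45.0], [2450, 52.0], [2900, 110.0],
                  [2850, 105.0], [2600, 70.0], [2550, 68.0]])   # [liquid hours, GWh]
Z = linkage(zscore(farms), method="ward")          # standardize, then Ward linkage
labels = fcluster(Z, t=3, criterion="maxclust")    # cut the tree into 3 groups
print(labels)
```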
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the requirement of maximum entropy, the characteristics of the system, and the connection conditions. This makes it possible to apply MENT to the statistical description of closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and non-equilibrium states, as well as states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
Title 19, Customs Duties; CARNETS, Processing of Carnets; § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures whose complexity depends on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
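The demarking-point analysis described above lends itself to a short illustration. The sketch below uses the common mean ± 3·SD convention (values above the female-based cutoff are taken as definitely male, and vice versa); the convention, function names, and synthetic data are assumptions for illustration, not taken from the study.

```python
# Hedged sketch of demarking-point (D.P.) analysis via the mean ± 3*SD convention.
import numpy as np

def demarking_points(male_lengths, female_lengths):
    m_mu, m_sd = np.mean(male_lengths), np.std(male_lengths, ddof=1)
    f_mu, f_sd = np.mean(female_lengths), np.std(female_lengths, ddof=1)
    definitely_male = f_mu + 3 * f_sd    # longer than any plausible female femur
    definitely_female = m_mu - 3 * m_sd  # shorter than any plausible male femur
    return definitely_male, definitely_female

# Synthetic lengths (mm) loosely echoing the reported means, for demonstration only.
rng = np.random.default_rng(0)
males = rng.normal(452, 20, 136)
females = rng.normal(417, 18, 48)
print(demarking_points(males, females))
```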
FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.
Maxim Nikolaievich Shokhirev
The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.
Energy Efficient Cluster Based Scheduling Scheme for Wireless Sensor Networks.
Janani, E Srie Vidhya; Kumar, P Ganesh
2015-01-01
The energy utilization of sensor nodes in a large-scale wireless sensor network points to the crucial need for scalable and energy-efficient clustering protocols. Since sensor nodes usually operate on batteries, the maximum utility of the network depends greatly on ideal usage of the energy left in these sensor nodes. In this paper, we propose an Energy Efficient Cluster Based Scheduling Scheme for wireless sensor networks that balances sensor network lifetime and energy efficiency. In the first phase of the proposed scheme, the cluster topology is discovered and a cluster head is chosen based on remaining energy level. The cluster head monitors the network energy threshold value to identify the energy drain rate of all its cluster members. In the second phase, a scheduling algorithm allocates time slots to cluster member data packets, so that congestion is avoided entirely. In the third phase, an energy consumption model is proposed to maintain the maximum residual energy level across the network. Moreover, we also propose a new packet format for all cluster member nodes. The simulation results show that the proposed scheme contributes greatly to maximum network lifetime, high energy, reduced overhead, and maximum delivery ratio.
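The first two phases lend themselves to a small sketch: electing the member with the most residual energy as cluster head, then handing out collision-free time slots. The data layout, field names, and tie-breaking below are assumptions for illustration, not the paper's exact protocol.

```python
# Illustrative sketch: residual-energy cluster-head election and TDMA slot assignment.
def elect_cluster_head(members):
    """members: list of dicts like {'id': 3, 'energy': 1.8} (energy in joules)."""
    return max(members, key=lambda n: n["energy"])

def assign_slots(members, head):
    """One time slot per non-head member, in id order (a congestion-free schedule)."""
    others = sorted((n for n in members if n is not head), key=lambda n: n["id"])
    return {n["id"]: slot for slot, n in enumerate(others)}

nodes = [{"id": i, "energy": e} for i, e in enumerate([1.2, 0.7, 1.8, 1.5])]
head = elect_cluster_head(nodes)
print(head["id"], assign_slots(nodes, head))
```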
Tina M Briere; Marcel H F Sluiter; Vijay Kumar; Yoshiyuki Kawazoe
2003-01-01
The geometries of several Mn clusters in the size range Mn13–Mn23 are studied via the generalized gradient approximation to density functional theory. For the 13- and 19-atom clusters, the icosahedral structures are found to be most stable, while for the 15-atom cluster, the bcc structure is more favoured. The clusters show ferrimagnetic spin configurations.
Dissolution of Globular Clusters
Baumgardt, Holger
2006-01-01
Globular clusters are among the oldest objects in galaxies, and understanding the details of their formation and evolution can bring valuable insight into the early history of galaxies. This review summarises the current knowledge about the dissolution of star clusters and discusses the implications of star cluster dissolution for the evolution of the mass function of star cluster systems in galaxies.
Clustering of correlated networks
Dorogovtsev, S. N.
2003-01-01
We obtain the clustering coefficient, the degree-dependent local clustering, and the mean clustering of networks with arbitrary correlations between the degrees of the nearest-neighbor vertices. The resulting formulas allow one to determine the nature of the clustering of a network.
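The quantities in this abstract (the per-vertex clustering coefficient, the degree-dependent local clustering C(k), and the mean clustering) are easy to inspect numerically; a short sketch with networkx follows, where the karate club graph is just a convenient stand-in for any correlated network.

```python
# Empirical clustering coefficients on a toy graph with networkx.
import networkx as nx

G = nx.karate_club_graph()
local = nx.clustering(G)                     # per-vertex clustering coefficient
by_degree = {}                               # degree-dependent local clustering C(k)
for v, c in local.items():
    by_degree.setdefault(G.degree(v), []).append(c)
mean_by_degree = {k: sum(cs) / len(cs) for k, cs in sorted(by_degree.items())}
print(nx.average_clustering(G), mean_by_degree)
```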
Palladium clusters deposited on the heterogeneous substrates
Wang, Kun; Liu, Juanfang; Chen, Qinghua
2016-07-01
To improve the performance of Pd composite membranes prepared by cold-spraying technology, it is essential to gain atomic-level insight into the deposition process of the cluster and the heterogeneous deposition of large Pd clusters at different incident velocities. The deposition behavior, morphologies, energetics, and interfacial configuration were examined by molecular dynamics simulation and characterized by the cluster flattening ratio, the substrate maximum local temperature, the number of atom-embedded layers, and surface-alloy formation. From the morphology evolution, three deposition stages and the corresponding structural and energy evolution were clearly identified. The cluster deformation and penetration depth increased with incident velocity, but the degree of increase also depended on the substrate hardness. The interfacial interaction between the cluster and the substrate can be improved by a higher substrate local temperature. Furthermore, it was found that surface alloys formed by site exchange between cluster and substrate atoms, and that the cluster atoms rearranged following the substrate lattice arrangement from bottom to top during deposition. The ability and scope of this structural reconstruction are largely determined by both the size and the incident energy of the impacting cluster.
Enhanced momentum feedback from clustered supernovae
Gentry, Eric S.; Krumholz, Mark R.; Dekel, Avishai; Madau, Piero
2017-02-01
Young stars typically form in star clusters, so the supernovae (SNe) they produce are clustered in space and time. This clustering of SNe may alter the momentum per SN deposited in the interstellar medium (ISM) by affecting the local ISM density, which in turn affects the cooling rate. We study the effect of multiple SNe using idealized 1D hydrodynamic simulations which explore a large parameter space of the number of SNe, and the background gas density and metallicity. The results are provided as a table and an analytic fitting formula. We find that for clusters with up to ˜100 SNe, the asymptotic momentum scales superlinearly with the number of SNe, resulting in a momentum per SN which can be an order of magnitude larger than for a single SN, with a maximum efficiency for clusters with 10-100 SNe. We argue that additional physical processes not included in our simulations - self-gravity, breakout from a galactic disc, and galactic shear - can slightly reduce the momentum enhancement from clustering, but the average momentum per SN still remains a factor of 4 larger than the isolated SN value when averaged over a realistic cluster mass function for a star-forming galaxy. We conclude with a discussion of the possible role of mixing between hot and cold gas, induced by multidimensional instabilities or pre-existing density variations, as a limiting factor in the build-up of momentum by clustered SNe, and suggest future numerical experiments to explore these effects.
Giacomin, Valeria
This dissertation examines the case of the palm oil cluster in Malaysia and Indonesia, today one of the largest agricultural clusters in the world. My analysis focuses on the evolution of the cluster from the 1880s to the 1970s in order to understand how it helped these two countries to integrate......-researched topic in the cluster literature – the emergence of clusters, their governance and institutional change, and competition between rival cluster locations – through the case of the Southeast Asian palm oil cluster....
2015-02-01
In a similar manner, globalization has also created new realities, such as in the case of food production where choice now affects demand as much as... quantity did in the past. “Two major factors drive food requirements [and market prices]: a growing global population and prosperity that expands... argued earlier, to expend effort in other nations without consideration of building capacity and resiliency risks strategic failure and wastage of
Markets and Institutional Capacity
Ingemann, Jan Holm
2010-01-01
Adequate explanations of the introduction of production and consumption of organic food in Denmark require a certain understanding of markets. Markets should consequently be seen not as entities or places but as complex relations between human actors. Further, the e..., the establishment, maintenance and development of markets depend on the capacity of the actors to enter into a continuous and enhancing interplay.
Clustering in analytical chemistry.
Drab, Klaudia; Daszykowski, Michal
2014-01-01
Data clustering plays an important role in the exploratory analysis of analytical data, and the use of clustering methods has been acknowledged in different fields of science. In this paper, principles of data clustering are presented with a direct focus on clustering of analytical data. The role of the clustering process in the analytical workflow is underlined, and its potential impact on the analytical workflow is emphasized.
Capacity factors of a mixed speed railway network
Harrod, Steven
2009-01-01
Fifty-four combinations of track network and speed differential are evaluated within a linear, discrete-time network model that maximizes an objective function of train volume, delays, and idle train time. The results contradict accepted dispatching practice by suggesting that when introducing... a priority, high-speed train onto a network, maximum network flow is attained when the priority train operates at maximum speed. In addition, increasing siding capacity at meeting points may offer a network capacity improvement comparable to partial double track.
Competence building capacity shortage
Doorman, Gerard; Wangensteen, Ivar; Bakken, Bjoern
2005-02-01
The objective of the project 'Competence Building Capacity Shortage' has been 'to increase knowledge about central approaches aimed at solving the peaking capacity problem in restructured power systems'. With respect to reserve markets, a model was developed in the project to analyze the relations between reserve requirements and prices in the spot and reserve markets respectively. A mathematical model was also developed and implemented, which also includes the balance market, and has a good ability to predict the relations between these markets under various assumptions. With some further development, this model can be used for realistic analyses of these markets in a Nordic context. It was also concluded that certain system requirements with respect to frequency and time deviation can be relaxed without adverse effects. However, the requirements on system bias, Frequency Activated Operating Reserves and Frequency Activated Contingency Reserves cannot be relaxed, the latter because they must cover the dimensioning fault in the system. On the other hand, Fast Contingency Reserves can be reduced by removing requirements on national balances. Costs can furthermore be reduced by increasingly adopting a Nordic, as opposed to national, approach. A model for stepwise power flow was developed in the project, which is especially useful for analyzing slow power system dynamics. This is relevant when analysing the effects of reserve requirements. A model for the analysis of the capacity balance in Norway and Sweden was also developed. This model is useful for looking at the future balance under various assumptions regarding e.g. weather conditions, demand growth and the development of the generation system. With respect to the present situation, if there is some price flexibility on the demand side and system operators are able to use reserves from the demand side, the probability of load shedding during the peak load hour is close to zero under the weather
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
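The bound described above is simple enough to evaluate directly: the maximum seismic moment equals the injected volume times the modulus of rigidity, and moment converts to magnitude via the standard Mw = (2/3)(log10 M0 - 9.1) relation (M0 in N·m). The rigidity value below (~3e10 Pa) is a typical crustal figure assumed for illustration.

```python
# Worked sketch of the moment bound M0_max = G * dV and its moment magnitude.
import math

def max_induced_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    m0_max = rigidity_pa * injected_volume_m3     # upper bound on seismic moment (N*m)
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)

# e.g. 1e6 cubic meters of injected wastewater gives roughly magnitude 5:
print(round(max_induced_magnitude(1.0e6), 2))
```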
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Cluster formation probability in the trans-tin and trans-lead nuclei
Santhosh, K.P. [School of Pure and Applied Physics, Kannur University, Payyanur Campus, Payyanur 670 327 (India)], E-mail: drkpsanthosh@gmail.com; Biju, R.K.; Sahadevan, Sabina [P.G. Department of Physics and Research Centre, Payyanur College, Payyanur 670 327 (India)
2010-07-01
Within our fission model, the Coulomb and proximity potential model (CPPM), cluster formation probabilities are calculated for different clusters ranging from carbon to silicon for parents in the trans-tin and trans-lead regions. It is found that in the trans-tin region the ¹²C, ¹⁶O, ²⁰Ne and ²⁴Mg clusters have the maximum cluster formation probability and the lowest half-lives compared to other clusters. In the trans-lead region the ¹⁴C, ¹⁸,²⁰O, ²³F, ²⁴,²⁶Ne, ²⁸,³⁰Mg and ³⁴Si clusters have the maximum cluster formation probability and minimum half-life, which shows that alpha-like clusters are most probable for emission from the trans-tin region while non-alpha clusters are probable from the trans-lead region. These results stress the role of neutron-proton symmetry and asymmetry of the daughter nuclei in these two cases.
Rainfall Maximum Intensities for Urban Hydrological Design in Mexican Republic
Campos–Aranda D.F.
2010-04-01
First, the difficulties and general approach of urban flood estimation are established in terms of the urban hydrosystem concept and urbanization, based on Intensity-Duration-Frequency (IDF) curves. Next, a procedure for estimating IDF curves, which uses the Chen formula and the information available in the Mexican Republic on intensity isohyets and annual maximum daily rainfall, is contrasted against 10 recording gauges located in very different geographic zones. Later, having verified its capacity to reproduce the IDF curves, the procedure was applied at 45 important locations of the country, and the results are shown. Lastly, conclusions are formulated, pointing out the accuracy and simplicity of the proposed procedure.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
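A hedged sketch of the envelope-curve idea follows: fit an upper bound of the assumed form Q = C·A^b to regional peak flows in log-log space. The data values and the 10% upward shift are invented for illustration; the report's curves are empirical and region-specific.

```python
# Envelope-curve sketch: straight-line fit in log-log space, shifted upward
# so that the curve bounds the regional maxima.
import numpy as np

area = np.array([12.0, 85.0, 300.0, 1200.0, 5600.0])     # drainage area, mi^2
peak = np.array([9e3, 3.2e4, 7.5e4, 1.9e5, 4.6e5])       # regional peak flows, ft^3/s
b, logC = np.polyfit(np.log10(area), np.log10(peak), 1)  # slope b, intercept log10(C)
envelope = lambda A: 10**logC * A**b * 1.1               # ~10% shift to envelope the data
print(round(envelope(1000.0)))
```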
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain...... boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
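For concreteness, here is a minimal First-Fit-Increasing sketch for the maximum resource variant, where (unlike classical bin packing) using more bins is the objective being analyzed; the unit capacity and the floating-point tolerance are illustrative choices.

```python
# First-Fit-Increasing: sort items by increasing size, place each in the first
# bin with room, open a new bin otherwise, and report how many bins were used.
def first_fit_increasing(items, bin_capacity=1.0):
    bins = []  # each entry holds the remaining free space in that bin
    for size in sorted(items):
        for i, free in enumerate(bins):
            if size <= free + 1e-12:
                bins[i] = free - size
                break
        else:
            bins.append(bin_capacity - size)  # no bin fits: open a new one
    return len(bins)

print(first_fit_increasing([0.6, 0.5, 0.5, 0.4, 0.3, 0.2]))  # uses 3 unit bins
```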
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g. [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected...... in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
The capacity of the Hopfield associative memory
Mceliece, Robert J.; Posner, Edward C.; Rodemich, Eugene R.; Venkatesh, Santosh S.
1987-01-01
Techniques from coding theory are applied to study rigorously the capacity of the Hopfield associative memory. Such a memory stores n-tuples of +1s and -1s. The components change depending on a hard-limited version of linear functions of all other components. With symmetric connections between components, a stable state is ultimately reached. By building up the connection matrix as a sum of outer products of m fundamental memories, it may be possible to recover a certain one of the m memories by using an initial n-tuple probe vector less than Hamming distance n/2 away from the fundamental memory. If m fundamental memories are chosen at random, the maximum asymptotic value of m such that most of the m original memories are exactly recoverable is n/(2 log n). With the added restriction that every one of the m fundamental memories be recoverable exactly, m can be no more than n/(4 log n) asymptotically as n approaches infinity. Extensions are also considered, in particular to capacity under quantization of the outer-product connection matrix. This quantized memory-capacity problem is closely related to the capacity of the quantized Gaussian channel.
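The n/(2 log n) capacity statement is easy to play with numerically; below is a compact sketch of the sum-of-outer-products memory with a probe within Hamming distance n/2. The parameter choices are illustrative only.

```python
# Hopfield memory sketch: store m random +/-1 patterns of length n via the
# sum-of-outer-products rule, then recall one pattern from a corrupted probe.
import numpy as np

rng = np.random.default_rng(0)
n = 200
m = int(n / (2 * np.log(n)))                 # near the n/(2 log n) capacity, ~18 here
P = rng.choice([-1, 1], size=(m, n))
W = (P.T @ P) / n                            # sum-of-outer-products connection matrix
np.fill_diagonal(W, 0)                       # no self-connections

probe = P[0].copy()
probe[: n // 10] *= -1                       # corrupt 10% of bits (< n/2 Hamming distance)
for _ in range(20):                          # synchronous updates toward a stable state
    probe = np.sign(W @ probe).astype(int)
    probe[probe == 0] = 1                    # break ties deterministically
print((probe == P[0]).mean())                # fraction of the stored memory recovered
```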
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Hakan A. Çırpan
2002-05-01
Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Savulescu, Julian; Kahane, Guy
2011-01-01
Enhancing Human Capacities is the first volume to review the very latest scientific developments in human enhancement. It is unique in its examination of the ethical and policy implications of these technologies from a broad range of perspectives. It presents a rich range of perspectives on enhancement from world-leading ethicists and scientists from Europe and North America. It is the most comprehensive volume yet on the science and ethics of human enhancement, unique in providing a detailed overview of current and expected scientific advances in this area, and it discusses both general conceptual and ethical issues.
Laser-induced reconstruction of Ag clusters in helium droplets
Gomez, Luis F.; O'Connell, Sean M. O.; Jones, Curtis F.; Kwok, Justin; Vilesov, Andrey F.
2016-09-01
Silver clusters were assembled in helium droplets of different sizes ranging from 105 to 1010 atoms. The absorption of the clusters was studied upon laser irradiation at 355 nm and 532 nm, which is close to the plasmon resonance maximum in spherical Ag clusters and in the range of the absorption of the complex, branched Ag clusters, respectively. The absorption of the pulsed (7 ns) radiation at 532 nm shows some pronounced saturation effects, absent upon the continuous irradiation. This phenomenon has been discussed in terms of the melting of the complex Ag clusters at high laser fluence, resulting in a loss of the 532 nm absorption. Estimates of the heat transfer also indicate that a bubble may be formed around the hot cluster at high fluences, which may result in ejection of the cluster from the droplet, or disintegration of the droplet entirely.
Pratas, Nuno; Marchetti, Nicola; Rodrigues, Antonio
2010-01-01
scenarios encompassing different degrees of environmental correlation between the cluster nodes, number of cluster nodes and sensed channel occupation statistics. Through this study we show that, to maximize the capacity perceived by cooperative spectrum sensing, the use of data fusion needs
Østergaard, Christian Richter; Park, Eun Kyung
2015-01-01
Most studies on regional clusters focus on identifying factors and processes that make clusters grow. However, sometimes technologies and market conditions suddenly shift, and clusters decline. This paper analyses the process of decline of the wireless communication cluster in Denmark....... The longitudinal study on the high-tech cluster reveals that technological lock-in and exit of key firms have contributed to decline. Entrepreneurship has a positive effect on the cluster’s adaptive capabilities, while multinational companies have contradicting effects by bringing in new resources to the cluster...
Defining clusters in APT reconstructions of ODS steels
Williams, Ceri A., E-mail: ceri.williams@materials.ox.ac.uk [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Haley, Daniel [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Max-Planck-Institut für Eisenforschung GmbH, Max-Planck-Straße 1, D-40237 Düsseldorf (Germany); Marquis, Emmanuelle A. [Department of Materials Science and Engineering, University of Michigan, 2300 Hayward Street, Ann Arbor, MI 48109-2136 (United States); Smith, George D.W.; Moody, Michael P. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom)
2013-09-15
Oxide nanoclusters in a consolidated Fe–14Cr–2W–0.3Ti–0.3Y2O3 ODS steel and in the alloy powder after mechanical alloying (but before consolidation) are investigated by atom probe tomography (APT). The maximum separation method is a standard method to define and characterise clusters within APT data, but this work shows that the extent of clustering between the two materials is sufficiently different that the nanoclusters in the mechanically alloyed powder and in the consolidated material cannot be compared directly using the same cluster selection parameters. As the cluster selection parameters significantly influence the size and composition of the clusters, a procedure to optimise the input parameters for the maximum separation method is proposed by sweeping the d_max and N_min parameter space. By applying this method of cluster parameter selection, combined with a 'matrix correction' to account for trajectory aberrations, differences in the oxide nanoclusters can then be reliably quantified. Highlights: Oxide nanoclusters in an ODS steel are defined using the double maximum separation method. Clusters in ODS material at different stages during processing cannot be compared directly. Input parameters for the maximum separation method are optimised by an objective function. When combined with a 'matrix correction', variation in the nanoclusters can be quantified.
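A sketch of the basic (single) maximum separation method follows: solute atoms closer than d_max are linked, and connected groups with at least N_min atoms are kept as clusters. The double variant and the matrix correction used in the paper are not reproduced, and the d_max and N_min defaults are placeholders.

```python
# Maximum separation clustering of solute atom positions (nm), sketched with SciPy.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def max_separation_clusters(solute_xyz, d_max=0.5, n_min=10):
    """Return index arrays, one per cluster with >= n_min linked solute atoms."""
    tree = cKDTree(solute_xyz)
    pairs = np.array(list(tree.query_pairs(d_max)))   # all links shorter than d_max
    if len(pairs) == 0:
        return []
    n = len(solute_xyz)
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    n_comp, labels = connected_components(adj, directed=False)
    return [np.where(labels == c)[0] for c in range(n_comp)
            if (labels == c).sum() >= n_min]
```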
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving the satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. Proposed theory and analysis is validated through experimental investigations.
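The tracking requirement discussed above is commonly met with a perturb-and-observe loop; the generic sketch below is not the paper's analysis (which concerns converter topologies and load ranges), and panel_power() is a hypothetical stand-in for a measured power.

```python
# Generic perturb-and-observe MPPT sketch: nudge the duty ratio, observe the
# power, and reverse direction whenever power falls.
def perturb_and_observe(panel_power, d0=0.5, step=0.01, iters=200):
    d, p_prev, direction = d0, panel_power(d0), 1
    for _ in range(iters):
        d = min(max(d + direction * step, 0.0), 1.0)  # perturb duty ratio within [0, 1]
        p = panel_power(d)
        if p < p_prev:             # power fell: reverse the perturbation direction
            direction = -direction
        p_prev = p
    return d

# Toy power curve with a single maximum near d = 0.62 (hypothetical):
print(round(perturb_and_observe(lambda d: -(d - 0.62) ** 2 + 1.0), 2))
```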
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs that attain these bounds are constructed.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}=\mathcal{O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
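The reported gain from one trial (0.939) to five trials (0.987) is consistent with the Spearman-Brown prophecy formula; the check below is our illustration, since the abstract does not state which formula the authors used:

```latex
R_k = \frac{k\,r}{1+(k-1)\,r}, \qquad
R_5 = \frac{5 \times 0.939}{1 + 4 \times 0.939} \approx 0.987 .
```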
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Molecular dynamical simulations of melting behaviors of metal clusters
Ilyar Hamid
2015-04-01
The melting behaviors of metal clusters are studied over a wide range by molecular dynamics simulations. The calculated results show that there are fluctuations in the heat capacity curves of some metal clusters due to strong structural competition. For the 13-, 55- and 147-atom clusters, the variations of the melting points with atomic number are almost the same. It is found that for different metal clusters the dynamical stabilities of the octahedral structures can in general be inferred by a criterion proposed earlier by F. Baletto et al. [J. Chem. Phys. 116, 3856 (2002)] for the statically stable structures.
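In such studies the melting point is typically read off as a peak in the canonical heat capacity estimated from energy fluctuations; a minimal sketch under that standard assumption (the paper's exact estimator is not specified in the abstract):

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def heat_capacity(energies, temperature):
    """Canonical heat capacity C_v = (<E^2> - <E>^2) / (k_B T^2)
    from a time series of total energies sampled at fixed temperature T."""
    return np.var(energies) / (K_B * temperature**2)

# A peak of heat_capacity(E_T, T) across a temperature scan marks melting.
```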
Deficiency of employability capacity
Pelse I.
2012-10-01
Young unemployed people have comprised one of the largest groups of unemployed people in Latvia in recent years. One of the reasons why young people have difficulty integrating into the labour market is the “expectation gap” that exists in the relations between employers and the new generation of workers. In capacity-building for employability, employers focus on such individual factors as strength, patience, self-discipline, self-reliance and self-motivation, which have the nature of habits and are developed in a long-term work socialization process that begins even before formal education and continues throughout the life cycle. However, when this socialization is interrupted, these habits depreciate faster than they can be restored. Currently a new generation is entering the labour market that is missing the succession of work socialization. Factors such as rising unemployment and poverty over the past twenty years in Latvia have created a very unfavourable employability background of “personal circumstances” and “external factors”, which have seriously impaired the formation of skills and attitudes in a real work environment. The study reveals another paradox – the paradox of poverty. Common sense would argue that poverty can be overcome by a job. However, the real state of affairs shows that an unfavourable coincidence of individual, personal circumstances and external factors leads to a deficit of employability capacity and the possibility of marked social and employment deprivation.
Intra-articular capacity of the elbow joint.
Van Den Broek, Mathias; Van Riet, Roger
2017-09-01
The intra-articular capacity of the elbow joint has been reported as 23 ± 4 ml in cadaveric elbows, and for years this value was the standard. The aim of this observational study was to reanalyze the volume of the elbow joint in live patients. Measurement of the intra-articular capacity and pressure of the elbow joint was performed on 30 patients (mean age: 43.8 years) undergoing elbow arthroscopy. Intra-articular capacity was recorded when the elbow moved to the maximum loose-packed position and/or when there was a sudden drop in pressure, indicating a capsular rupture (maximum capacity). Indications for arthroscopy were loose bodies, osteoarthritis, synovitis, radial head resection, and lateral collateral ligament repair. Mean intra-articular capacity and pressure were 35.8 ml and 557.5 mm Hg, respectively. Mean maximal capacity was 40.5 ml. We conclude that the intra-articular capacity of the elbow joint is substantially greater than reported in previous studies. Clin. Anat. 30:795-798, 2017. © 2017 Wiley Periodicals, Inc.
Comprehensive cluster analysis with Transitivity Clustering.
Wittkop, Tobias; Emig, Dorothea; Truss, Anke; Albrecht, Mario; Böcker, Sebastian; Baumbach, Jan
2011-03-01
Transitivity Clustering is a method for the partitioning of biological data into groups of similar objects, such as genes. It provides integrated access to various functions addressing each step of a typical cluster analysis. To facilitate this, Transitivity Clustering is accessible online and offers three user-friendly interfaces: a powerful stand-alone version, a web interface, and a collection of Cytoscape plug-ins. In this paper, we describe three major workflows: (i) protein (super)family detection with Cytoscape, (ii) protein homology detection with incomplete gold standards, and (iii) clustering of gene expression data. This protocol guides the user through the most important features of Transitivity Clustering and takes ∼1 h to complete.
The little-studied cluster Berkeley 90 - III. Cluster parameters
Marco, Amparo; Negueruela, Ignacio
2017-02-01
The open cluster Berkeley 90 is home to one of the most massive binary systems in the Galaxy, LS III +46°11, formed by two identical, very massive stars (O3.5 If* + O3.5 If*), and a second early-O system (LS III +46°12, with an O4.5 IV((f)) component at least). Stars with spectral types earlier than O4 are very scarce in the Milky Way, with no more than 20 examples. The formation of such massive stars is still an open question today, and thus the study of the environments where the most massive stars are found can shed some light on this topic. To this aim, we determine the properties and characterize the population of Berkeley 90 using optical, near-infrared and WISE photometry and optical spectroscopy. This is the first accurate determination of these parameters. We find a distance of $3.5^{+0.5}_{-0.5}$ kpc and a maximum age of 3 Ma. The cluster mass is around 1000 M⊙ (perhaps reaching 1500 M⊙ if the surrounding population is added), and we do not detect candidate runaway stars in the area. There is a second population of young stars to the southeast of the cluster that may have formed at the same time or slightly later, with some evidence for low-activity ongoing star formation.
Violating the Shannon capacity of metric graphs with entanglement
Briët, Jop; Buhrman, Harry; Gijswijt, Dion
2013-01-01
The Shannon capacity of a graph G is the maximum asymptotic rate at which messages can be sent with zero probability of error through a noisy channel with confusability graph G. This extensively studied graph parameter disregards the fact that on atomic scales, nature behaves in line with quantum mechanics. Entanglement, arguably the most counterintuitive feature of the theory, turns out to be a useful resource for communication across noisy channels. Recently [Leung D, Mančinska L, Matthews W, Ozols M, Roy A (2012) Commun Math Phys 311:97–111], two examples of graphs were presented whose Shannon capacity is strictly less than the capacity attainable if the sender and receiver have entangled quantum systems. Here, we give natural, possibly infinite, families of graphs for which the entanglement-assisted capacity exceeds the Shannon capacity.
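To make the parameter concrete: the Shannon capacity is $\Theta(G) = \sup_n \alpha(G^{\boxtimes n})^{1/n}$, and already for the 5-cycle the second strong power beats the base graph's independence number, giving $\Theta(C_5) \ge \sqrt{5}$. A brute-force check, feasible only for tiny graphs (our illustration):

```python
import networkx as nx

def independence_number(G):
    # alpha(G) = clique number of the complement (exponential time; tiny graphs only)
    return max(len(c) for c in nx.find_cliques(nx.complement(G)))

C5 = nx.cycle_graph(5)
print(independence_number(C5))                         # 2
print(independence_number(nx.strong_product(C5, C5)))  # 5, so Theta(C5) >= sqrt(5)
```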
THE TRANSMISSION CAPACITY OF MANET BASED ON CONFLICT GRAPH
无
2007-01-01
The transmission capacity of Mobile Ad Hoc Networking (MANET) is constrained by the mutual interference of concurrent transmissions between nodes. First, the transmission capacity of MANET is studied from the viewpoint of information flow between nodes, and the effect of inter-node interference on transmission capacity is analyzed using the event conflict graph. Second, the paper presents a method to compute the maximum expected achievable capacity for a given conflict graph, and states and proves a sufficient condition for information flow to be transmitted successfully between nodes. Finally, the results are simulated and a fitted equation for the transmission capacity between nodes is given.
Effective capacity of multiple antenna channels: Correlation and keyhole
Zhong, Caijun
2012-01-01
In this study, the authors derive the effective capacity limits for multiple antenna channels which quantify the maximum achievable rate with consideration of link-layer delay-bound violation probability. Both correlated multiple-input single-output and multiple-input multiple-output keyhole channels are studied. Based on the closed-form exact expressions for the effective capacity of both channels, the authors look into the asymptotic high and low signal-to-noise ratio regimes, and derive simple expressions to gain more insights. The impact of spatial correlation on effective capacity is also characterised with the aid of a majorisation theory result. It is revealed that antenna correlation reduces the effective capacity of the channels and a stringent quality-of-service requirement causes a severe reduction in the effective capacity but can be alleviated by increasing the number of antennas. © 2012 The Institution of Engineering and Technology.
Measurement and relevance of maximum metabolic rate in fishes.
Norin, T; Clark, T D
2016-01-01
Maximum (aerobic) metabolic rate (MMR) is defined here as the maximum rate of oxygen consumption (M˙O2max) that a fish can achieve at a given temperature under any ecologically relevant circumstance. Different techniques exist for eliciting MMR of fishes, of which swim-flume respirometry (critical swimming speed tests and burst-swimming protocols) and exhaustive chases are the most common. Available data suggest that the most suitable method for eliciting MMR varies with species and ecotype, and depends on the propensity of the fish to sustain swimming for extended durations as well as its capacity to simultaneously exercise and digest food. MMR varies substantially (>10 fold) between species with different lifestyles (i.e. interspecific variation), and to a lesser extent within species (i.e. intraspecific variation). Because MMR sets the upper limit of aerobic scope, interest in measuring this trait has spread across disciplines in attempts to predict effects of climate change on fish populations. Here, various techniques used to elicit and measure MMR in different fish species with contrasting lifestyles are outlined and the relevance of MMR to the ecology, fitness and climate change resilience of fishes is discussed.
Niching method using clustering crowding
GUO Guan-qi; GUI Wei-hua; WU Min; YU Shou-yi
2005-01-01
This study analyzes drift phenomena of deterministic crowding and probabilistic crowding by using an equivalence class model and expectation proportion equations. It is proved that the replacement errors of deterministic crowding cause the population to converge to a single individual, thus resulting in premature stagnation or the loss of optional optima, and that probabilistic crowding can maintain multiple subpopulations in equilibrium when the population size is adequately large. An improved niching method using clustering crowding is proposed. By analyzing the topology of the fitness landscape using a hill-valley function and extending the search space for similarity analysis, clustering crowding determines the locality of the search space more accurately, thus greatly decreasing the replacement errors of crowding. The integration of deterministic and probabilistic replacement increases the capacity for both parallel local hill climbing and maintaining multiple subpopulations. The experimental results on various multimodal functions show that the performances of clustering crowding, such as the number of effective peaks maintained, average peak ratio and global optimum ratio, are uniformly superior to those of evolutionary algorithms using fitness sharing, simple deterministic crowding and probabilistic crowding.
Workshop on moisture buffer capacity
2003-01-01
Summary report of a Nordtest workshop on moisture buffer capacity held at Copenhagen, August 21-22, 2003.
无
2007-01-01
Paraformaldehyde (PF) production in China has grown to a considerable scale today. The total capacity was around 90 thousand t/a in 2006. Since 2007, the production capacity of PF has increased drastically.
The Cluster Substructure - Alignment Connection
Plionis, Manolis
2001-01-01
Using the APM cluster data we investigate whether the dynamical status of clusters is related to the large-scale structure of the Universe. We find that cluster substructure is strongly correlated with the tendency of clusters to be aligned with their nearest neighbour and, in general, with the nearby clusters that belong to the same supercluster. Furthermore, dynamically young clusters are more clustered than the overall cluster population. These are strong indications that clusters develop in ...
Nuclear Clusters in Astrophysics
Kubono, S.; Binh, Dam N.; Hayakawa, S.; Hashimoto, H.; Kahl, D.; Wakabayashi, Y.; Yamaguchi, H. [Center for Nuclear Study (CNS), University of Tokyo, Wako Branch at RIKEN 2-1 Hirosawa, Wako, Saitama, 351-0198 (Japan); Teranishi, T. [Department of Physics, Kyushu University, Fukuoka, 812-8581 (Japan); Iwasa, N. [Department of Physics, Tohoku University, Sendai, 980-8578 (Japan); Komatsubara, T. [Department of Physics, Tsukuba University, Ibaraki, 305-8571 (Japan); Kato, S. [Department of Physics, Yamagata University, Yamagata, 990-8560 (Japan); Khiem, Le H. [Institute of Physics, Vietnam Academy for Science and Technology, Hanoi (Viet Nam)
2010-03-01
The role of nuclear clustering is discussed for nucleosynthesis in stellar evolution with Cluster Nucleosynthesis Diagram (CND) proposed before. Special emphasis is placed on alpha-induced stellar reactions together with molecular states for O and C burning.
Stochastic self-assembly of incommensurate clusters
D'Orsogna, Maria; Lakatos, Greg; Chou, Tom
2013-03-01
We examine the classic problem of homogeneous nucleation and self-assembly by deriving and analyzing a fully discrete stochastic master equation. We enumerate the highest-probability steady states, and derive exact analytical formulae for quenched and equilibrium mean cluster size distributions. Upon comparison with results obtained from the associated mass-action Becker-Döring (BD) equations, we find striking differences between the two corresponding equilibrium mean cluster concentrations. These differences depend primarily on the divisibility of the total available mass by the maximum allowed cluster size, and the remainder. When such mass "incommensurability" arises, a single remainder particle can "emulsify" the system by significantly broadening the equilibrium mean cluster size distribution. This discreteness-induced broadening effect is periodic in the total mass of the system but arises even when the system size is asymptotically large, provided the ratio of the total mass to the maximum cluster size is finite. Our findings define a new scaling regime in which results from classic mass-action theories are qualitatively inaccurate, even in the limit of large total system size. This work was supported by NSF DMS-1021818 and DMS-1021850.
[Pathophysiology of cluster headache].
Donnet, Anne
2015-11-01
The aetiology of cluster headache is partially unknown. Three areas are involved in the pathogenesis of cluster headache: the trigeminal nociceptive pathways, the autonomic system and the hypothalamus. The cluster headache attack involves activation of the trigeminal autonomic reflex. A dysfunction located in posterior hypothalamic gray matter is probably pivotal in the process. There is a probable association between smoke exposure, a possible genetic predisposition and the development of cluster headache.
Landfill Construction and Capacity Expansion
Andre, F.J.; Cerda, E.
2003-01-01
We study the optimal capacity and lifetime of landfills taking into account their sequential nature. Such an optimal capacity is characterized by the so-called Optimal Capacity Condition. Particular versions of this condition are obtained for two alternative settings: first, if all the landfills are t
STAR FORMATION IN DENSE CLUSTERS
Myers, Philip C., E-mail: pmyers@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)
2011-12-10
A model of core-clump accretion with equally likely stopping describes star formation in the dense parts of clusters, where models of isolated collapsing cores may not apply. Each core accretes at a constant rate onto its protostar, while the surrounding clump gas accretes as a power of protostar mass. Short accretion flows resemble Shu accretion and make low-mass stars. Long flows resemble reduced Bondi accretion and make massive stars. Accretion stops due to environmental processes of dynamical ejection, gravitational competition, and gas dispersal by stellar feedback, independent of initial core structure. The model matches the field star initial mass function (IMF) from 0.01 to more than 10 solar masses. The core accretion rate and the mean accretion duration set the peak of the IMF, independent of the local Jeans mass. Massive protostars require the longest accretion durations, up to 0.5 Myr. The maximum protostar luminosity in a cluster indicates the mass and age of its oldest protostar. The distribution of protostar luminosities matches those in active star-forming regions if protostars have a constant birthrate but not if their births are coeval. For constant birthrate, the ratio of young stellar objects to protostars indicates the star-forming age of a cluster, typically ~1 Myr. The protostar accretion luminosity is typically less than its steady spherical value by a factor of ~2, consistent with models of episodic disk accretion.
Hot Outflows in Galaxy Clusters
Kirkpatrick, C C
2015-01-01
The gas-phase metallicity distribution has been analyzed for the hot atmospheres of 29 galaxy clusters using Chandra X-ray Observatory observations. All host brightest cluster galaxies (BCGs) with X-ray cavity systems produced by radio AGN. We find high elemental abundances projected preferentially along the cavities of 16 clusters. The metal-rich plasma was apparently lifted out of the BCGs with the rising X-ray cavities (bubbles) to altitudes between twenty and several hundred kiloparsecs. A relationship between the maximum projected altitude of the uplifted gas (the "iron radius") and jet power is found with the form $R_{\rm Fe} \propto P_{\rm jet}^{0.45}$. The estimated outflow rates are typically tens of solar masses per year but exceed $100~{\rm M_\odot\,yr^{-1}}$ in the most powerful AGN. The outflow rates are 10% to 20% of the cooling rates, and thus alone are unable to offset a cooling inflow. Nevertheless, hot outflows effectively redistribute the cooling gas and may play a significant role at ...
Cluster Physics with Merging Galaxy Clusters
Sandor M. Molnar
2016-02-01
Collisions between galaxy clusters provide a unique opportunity to study matter in a parameter space which cannot be explored in our laboratories on Earth. In the standard $\Lambda$CDM model, where the total density is dominated by the cosmological constant ($\Lambda$) and the matter density by cold dark matter (CDM), structure formation is hierarchical, and clusters grow mostly by merging. Mergers of two massive clusters are the most energetic events in the universe after the Big Bang, hence they provide a unique laboratory to study cluster physics. The two main mass components in clusters behave differently during collisions: the dark matter is nearly collisionless, responding only to gravity, while the gas is subject to pressure forces and dissipation, and shocks and turbulence are developed during collisions. In the present contribution we review the different methods used to derive the physical properties of merging clusters. Different physical processes leave their signatures at different wavelengths, thus our review is based on a multifrequency analysis. In principle, the best way to analyze multifrequency observations of merging clusters is to model them using N-body/HYDRO numerical simulations, and we discuss the results of such detailed analyses. New high spatial and spectral resolution ground- and space-based telescopes will come online in the near future. Motivated by these new opportunities, we briefly discuss methods which will be feasible in the near future for studying merging clusters.
Lorentzen, Jochen; Robbins, Glen; Barnes, Justin
2004-01-01
The paper describes the formation of the Durban Auto Cluster in the context of trade liberalization. It argues that the improvement of operational competitiveness of firms in the cluster is prominently due to joint action. It tests this proposition by comparing the gains from cluster activities i...
Marketing research cluster analysis
Marić Nebojša
2002-01-01
One area of application of cluster analysis in marketing is the identification of groups of cities and towns with similar demographic profiles. This paper considers the main aspects of cluster analysis through an example of clustering 12 cities with the use of Minitab software.
Cluster Correspondence Analysis
M. van de Velden (Michel); A. Iodice D'Enza; F. Palumbo
2014-01-01
A new method is proposed that combines dimension reduction and cluster analysis for categorical data. A least-squares objective function is formulated that approximates the cluster-by-variables cross-tabulation. Individual observations are assigned to clusters
Phosphorus retention capacity of sediments in Mandovi estuary (Goa)
Rajagopal, M.D.; Reddy, C.V.G.
Experiments carried out under controlled conditions to study P retention capacity of sediments indicate that the processes of adsorption and desorption of P are pH dependent. Adsorption of P is maximum (58-99%) at pH 4. Both the exchangeable P...
Violating the Shannon capacity of metric graphs with entanglement
J. Briët (Jop); H. Buhrman (Harry); D. Gijswijt (Dion)
2012-01-01
The Shannon capacity of a graph G is the maximum asymptotic rate at which messages can be sent with zero probability of error through a noisy channel with confusability graph G. This extensively studied graph parameter disregards the fact that on atomic scales, nature behaves in line
Maximum holding endurance time: Effects of load and load's center of gravity height.
Lee, Tzu-Hsien
2015-01-01
Manual holding tasks are a potential risk factor for the development of musculoskeletal injuries, since they are prone to induce localized muscle fatigue. Maximum holding endurance time is a significant parameter for the design of manual holding tasks. This study aimed to examine the effects of load and the load's COG height on maximum holding endurance time. Fifteen young and healthy males were recruited as participants. A factorial design was used, with four levels of load (15%, 30%, 45% and 60% of the participant's maximum holding capacity) and two levels of the load's COG height in the box (0 cm and 40 cm above the handle position). Maximum holding endurance time decreased with increasing load and/or increasing COG height, and the effect of the COG height diminished with increasing load. Load, the load's COG height, and their interaction all significantly affected maximum holding endurance time. Practitioners should take these effects into account when setting the working conditions of holding tasks.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
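The l_1-penalized Gaussian maximum likelihood problem described here is the estimator now widely known as the graphical lasso; an off-the-shelf solver can be exercised as below (a sketch using scikit-learn's implementation, which is not the authors' block coordinate descent or Nesterov-based algorithm):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))        # toy data: 500 samples, 20 variables

model = GraphicalLasso(alpha=0.2).fit(X)  # alpha is the l1 penalty weight
precision = model.precision_              # sparse inverse covariance estimate
print((np.abs(precision) > 1e-8).sum())   # count the non-zero entries
```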
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
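Among the processes listed above, Ornstein-Uhlenbeck motion is the simplest nontrivial member of the class; a minimal Euler-Maruyama simulation, with parameter values chosen purely for illustration:

```python
import numpy as np

def simulate_ou(n_steps, dt, theta, sigma, x0=0.0, seed=0):
    """Euler-Maruyama discretization of dX = -theta * X dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        x[t] = x[t-1] - theta * x[t-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

track = simulate_ou(10_000, 0.1, theta=0.5, sigma=1.0)  # range-resident movement
```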
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We study two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
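For reference, First-Fit with the two pre-orderings named above takes only a few lines; under the maximum resource objective a packing is scored by how many bins it opens (a sketch, not the paper's analysis):

```python
def first_fit(items, capacity=1.0):
    """Place each item into the first bin it fits; open a new bin otherwise."""
    bins = []
    for x in items:
        for b in bins:
            if sum(b) + x <= capacity:
                b.append(x)
                break
        else:
            bins.append([x])
    return bins

def first_fit_increasing(items):
    return first_fit(sorted(items))

def first_fit_decreasing(items):
    return first_fit(sorted(items, reverse=True))

items = [0.6, 0.5, 0.4, 0.3, 0.2]
print(len(first_fit_increasing(items)), len(first_fit_decreasing(items)))  # 3 2
```

On this input, First-Fit-Increasing opens three bins while First-Fit-Decreasing opens two, illustrating why the increasing order is the natural heuristic when the goal is to maximize the number of bins used.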
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
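The calculation claimed here is short enough to display: under the single constraint that the average of ln k is fixed, maximizing the Shannon entropy gives a pure power law (a standard Lagrange-multiplier sketch of the article's argument):

```latex
\max_{\{p_k\}} \Big(-\sum_k p_k \ln p_k\Big)
\quad \text{s.t.} \quad \sum_k p_k = 1, \;\; \sum_k p_k \ln k = \mu
\;\;\Longrightarrow\;\;
p_k \propto e^{-\alpha \ln k} = k^{-\alpha}.
```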
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This maximum radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
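The quantity $EE(G)$ is easy to evaluate numerically from the adjacency spectrum; a quick sketch (illustrative; the Petersen graph is an arbitrary example, not one of the paper's extremal bicyclic graphs):

```python
import numpy as np
import networkx as nx

G = nx.petersen_graph()

# Estrada index from the adjacency spectrum: EE(G) = sum_i exp(lambda_i)
eigenvalues = np.linalg.eigvalsh(nx.to_numpy_array(G))
print(np.exp(eigenvalues).sum())

# networkx ships the same quantity as a built-in:
print(nx.estrada_index(G))
```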
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-01-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max − X_0 = (0.59 ± 0.02) · Y_X/P · C.
The maximum rate of mammal evolution.
Evans, Alistair R; Jones, David; Boyer, Alison G; Brown, James H; Costa, Daniel P; Ernest, S K Morgan; Fitzgerald, Erich M G; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Smith, Felisa A; Stephens, Patrick R; Theodor, Jessica M; Uhen, Mark D
2012-03-13
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
BioCluster:Tool for Identification and Clustering of Enterobacteriaceae Based on Biochemical Data
Ahmed Abdullah; S.M. Sabbir Alam; Munawar Sultana; M. Anwar Hossain
2015-01-01
Presumptive identification of different Enterobacteriaceae species is routinely achieved based on biochemical properties. Traditional practice includes manual comparison of each biochemical property of the unknown sample with known reference samples and inference of its identity based on the maximum similarity pattern with the known samples. This process is labor-intensive, time-consuming, error-prone, and subjective. Therefore, automation of sorting and similarity calculation would be advantageous. Here we present a MATLAB-based graphical user interface (GUI) tool named BioCluster. This tool was designed for automated clustering and identification of Enterobacteriaceae based on biochemical test results. In this tool, we used two types of algorithms, i.e., traditional hierarchical clustering (HC) and the Improved Hierarchical Clustering (IHC), a modified algorithm that was developed specifically for the clustering and identification of Enterobacteriaceae species. IHC takes into account the variability in results of 1–47 biochemical tests within the Enterobacteriaceae family. This tool also provides different options to optimize the clustering in a user-friendly way. Using computer-generated synthetic data and some real data, we have demonstrated that BioCluster has high accuracy in clustering and identifying enterobacterial species based on biochemical test data. This tool can be freely downloaded at http://microbialgen.du.ac.bd/biocluster/.
Capacity Utilization in European Railways
Khadem Sameni, Melody; Landex, Alex
2013-01-01
At the strategic level, railways currently use different indices to estimate how ‘value’ is generated by using railway capacity. However, railway capacity is a multidisciplinary area, and attempts to develop various indices cannot provide a holistic measure of operational efficiency. European railways are facing a capacity challenge caused by passenger and freight demand exceeding the track capacity supply. In the absence of a comprehensive railway capacity manual, methodologies are needed to assess how well railways use their track capacity. This paper presents a novel and unprecedented approach for this aim. The relative operational efficiency of 24 European railways in capacity utilization is studied for the first time by data envelopment analysis (DEA). It deviates from previous applications of DEA in the railway industry, which were conducted to analyze the cost efficiency of railways...
Cluster analysis for applications
Anderberg, Michael R
1973-01-01
Cluster Analysis for Applications deals with methods and various applications of cluster analysis. Topics covered range from variables and scales to measures of association among variables and among data units. Conceptual problems in cluster analysis are discussed, along with hierarchical and non-hierarchical clustering methods. The necessary elements of data analysis, statistics, cluster analysis, and computer implementation are integrated vertically to cover the complete path from raw data to a finished analysis.Comprised of 10 chapters, this book begins with an introduction to the subject o
Range-clustering queries
Abrahamsen, Mikkel; de Berg, Mark; Buchin, Kevin; Mehr, Mehran; Mehrabi, Ali D.
2017-01-01
In a geometric $k$-clustering problem the goal is to partition a set of points in $\\mathbb{R}^d$ into $k$ subsets such that a certain cost function of the clustering is minimized. We present data structures for orthogonal range-clustering queries on a point set $S$: given a query box $Q$ and an integer $k>2$, compute an optimal $k$-clustering for $S\\setminus Q$. We obtain the following results. We present a general method to compute a $(1+\\epsilon)$-approximation to a range-clustering query, ...
Cluster Decline and Resilience
Østergaard, Christian Richter; Park, Eun Kyung
Most studies on regional clusters focus on identifying factors and processes that make clusters grow. However, sometimes technologies and market conditions suddenly shift, and clusters decline. This paper analyses the process of decline of the wireless communication cluster in Denmark, 1963-2011. Our longitudinal study reveals that technological lock-in and exit of key firms have contributed to impairment of the cluster's resilience in adapting to disruptions. Entrepreneurship has a positive effect on cluster resilience, while multinational companies have contradicting effects by bringing...
Management of cluster headache
Tfelt-Hansen, Peer C; Jensen, Rigmor H
2012-01-01
and agitation. Patients may have up to eight attacks per day. Episodic cluster headache (ECH) occurs in clusters of weeks to months duration, whereas chronic cluster headache (CCH) attacks occur for more than 1 year without remissions. Management of cluster headache is divided into acute attack treatment....... In drug-resistant CCH, neuromodulation with either occipital nerve stimulation or deep brain stimulation of the hypothalamus is an alternative treatment strategy. For most cluster headache patients there are fairly good treatment options both for acute attacks and for prophylaxis. The big problem...
Artificial Neural Network In Maximum Power Point Tracking Algorithm Of Photovoltaic Systems
Modestas Pikutis
2014-05-01
Scientists are looking for ways to improve the efficiency of solar cells all the time. The efficiency of solar cells available to the general public is up to 20%. Part of the solar energy is unused and the capacity of a solar power plant is significantly reduced if a slow controller, or a controller which cannot stay at the maximum power point of the solar modules, is used. Various maximum power point tracking algorithms have been created, but most are slow or make mistakes. In the literature, artificial neural networks (ANN) are mentioned more and more often for the maximum power point tracking process, in order to improve the performance of the controller. A self-learning artificial neural network and the IncCond algorithm were used for maximum power point tracking in the created solar power plant model. The control algorithm was created. The solar power plant model is implemented in the Matlab/Simulink environment.
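For reference, a minimal sketch of the classical incremental-conductance (IncCond) rule that the abstract combines with the ANN; the step size and tolerance below are illustrative, not taken from the paper. At the maximum power point dP/dV = 0, which is equivalent to dI/dV = -I/V.

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, dv_ref=0.5, tol=1e-3):
    """One IncCond update: compare the incremental conductance dI/dV with
    the negative instantaneous conductance -I/V to decide which way to
    move the operating voltage toward the maximum power point."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:                 # irradiance rose at fixed V: track upward
            v_ref += dv_ref
        elif di < 0:
            v_ref -= dv_ref
    else:
        g_inc, g = di / dv, -i / v
        if abs(g_inc - g) < tol:
            pass                   # at the MPP: hold the reference voltage
        elif g_inc > g:
            v_ref += dv_ref        # operating point left of the MPP
        else:
            v_ref -= dv_ref        # operating point right of the MPP
    return v_ref
```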
Deployment Strategies and Clustering Protocols Efficiency
Chérif Diallo
2017-06-01
Wireless sensor networks face significant design challenges due to limited computing and storage capacities and, most importantly, dependence on limited battery power. Energy is a critical resource and is often an important issue for the deployment of sensor applications that aim to be omnipresent in the world of the future. Thus optimizing the deployment of sensors becomes a major constraint in the design and implementation of a WSN in order to ensure better network operation. In wireless networking, clustering techniques add scalability, reduce the computation complexity of routing protocols, allow data aggregation and thereby enhance network performance. The well-known MaxMin clustering algorithm was previously generalized, corrected and validated. Then, in a previous work we improved MaxMin by proposing a Single-node Cluster Reduction (SNCR) mechanism which eliminates single-node clusters and thereby improves energy efficiency. In this paper, we show that MaxMin, because of its original pathological case, does not support the grid deployment topology, which is frequently used in WSN architectures. The unreliability of wireless links can have negative impacts on Link Quality Indicator (LQI) based clustering protocols. So, in the second part of this paper we show how our distributed Link Quality based d-Clustering Protocol (LQI-DCP) has good performance in both stable and highly unreliable link environments. Finally, performance evaluation results also show that LQI-DCP fully supports the grid deployment topology and is more energy efficient than MaxMin.
Fragmentation dynamics of ammonia cluster ions after single photon ionisation
Kaiser, E.; Vries, J. de; Steger, H.; Menzel, C.; Kamke, W.; Hertel, I.V. (Freiburg Univ. (Germany, F.R.). Fakultaet fuer Physik Freiburg Univ. (Germany, F.R.). Freiburger Materialforschungszentrum)
1991-01-01
A reflecting time of flight mass spectrometer (RETOF) is used to study unimolecular and collision induced fragmentation of ammonia cluster ions. Synchrotron radiation from the BESSY electron storage ring is used in a range of photon energies from 9.08 up to 17.7 eV for single photon ionisation of neutral clusters in a supersonic beam. The threshold photoelectron photoion coincidence technique (TPEPICO) is used to define the energy initially deposited into the cluster ions. Metastable unimolecular decay (μs range) is studied using the RETOF's capacity for energy analysis. Under collision free conditions the by far most prominent metastable process is the evaporation of one neutral NH₃ monomer from protonated clusters (NH₃)ₓNH₄⁺. Abundances of homogeneous vs. protonated cluster ions and of metastable fragments are reported as a function of photon energy and cluster size up to n=10. (orig.).
Following the pioneering discovery of alpha clustering and of molecular resonances, the field of nuclear clustering is today one of those domains of heavy-ion nuclear physics that faces the greatest challenges, yet also contains the greatest opportunities. After many summer schools and workshops, in particular over the last decade, the community of nuclear molecular physicists has decided to collaborate in producing a comprehensive collection of lectures and tutorial reviews covering the field. This third volume follows the successful Lect. Notes Phys. 818 (Vol. 1) and 848 (Vol. 2), and comprises six extensive lectures covering the following topics: - Gamma Rays and Molecular Structure - Faddeev Equation Approach for Three Cluster Nuclear Reactions - Tomography of the Cluster Structure of Light Nuclei Via Relativistic Dissociation - Clustering Effects Within the Dinuclear Model: From Light to Hyper-heavy Molecules in Dynamical Mean-field Approach - Clusterization in Ternary Fission - Clusters in Light N...
Lawson, Andrew B
2002-01-01
Research has generated a number of advances in methods for spatial cluster modelling in recent years, particularly in the area of Bayesian cluster modelling. Along with these advances has come an explosion of interest in the potential applications of this work, especially in epidemiology and genome research. In one integrated volume, this book reviews the state-of-the-art in spatial clustering and spatial cluster modelling, bringing together research and applications previously scattered throughout the literature. It begins with an overview of the field, then presents a series of chapters that illuminate the nature and purpose of cluster modelling within different application areas, including astrophysics, epidemiology, ecology, and imaging. The focus then shifts to methods, with discussions on point and object process modelling, perfect sampling of cluster processes, partitioning in space and space-time, spatial and spatio-temporal process modelling, nonparametric methods for clustering, and spatio-temporal ...
Unconventional methods for clustering
Kotyrba, Martin
2016-06-01
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is the main task of exploratory data mining and a common technique for statistical data analysis used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, and bioinformatics. The topic of this paper is one of the modern methods of clustering, namely the SOM (Self-Organising Map). The paper describes the theory needed to understand the principle of clustering, along with descriptions of the algorithms used in our experiments.
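As a concrete companion to the description above, a minimal numpy sketch of SOM training; the grid size, learning rate and neighbourhood radius are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Self-Organising Map: each grid node holds a weight vector;
    the best-matching unit (BMU) and its neighbours are pulled toward each
    input, with learning rate and neighbourhood radius decaying over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # 2-D coordinates of each node, used by the neighbourhood function
    coords = np.array([(r, c) for r in range(h) for c in range(w)], float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * np.exp(-t / n_steps)
            sigma = sigma0 * np.exp(-t / n_steps)
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            theta = np.exp(-d2 / (2 * sigma ** 2))    # neighbourhood kernel
            weights += lr * theta[:, None] * (x - weights)
            t += 1
    return weights.reshape(h, w, -1)

data = np.random.default_rng(1).random((200, 3))
som = train_som(data)
print(som.shape)   # (8, 8, 3): one prototype vector per map node
```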
Gabriel Gulis PhD
2007-09-01
Background: To integrate health impact assessment (HIA) into existing decision-making processes requires not only methods and procedures but also well-trained experts, aware policy makers and appropriate institutions. Capacity building is the assistance provided to entities which need to develop a certain skill or competence, or a general upgrading of performance ability. If a new technique is planned to be introduced, there is a need for capacity building regardless of level (local, regional, national, international) or sector (health, environment, finance, social care, education, etc.). As such, HIA is a new technique for most of the new Member States and accession countries of the European Union.
Methods: To equip individuals with the understanding and skills needed to launch a HIA, or to make them aware of the availability of this methodology and able to access information, knowledge and training, we focused on the organization of workshops in participating countries. The workshops also served as pilot events to test a “curriculum” for HIA; a set of basic topics and presentations had been developed to be tested during the workshops. In addition to classical in-class workshops, we aimed to organize e-learning events as a way to overcome the “busyness” problem of decision makers.
Results: Throughout March – October 2006 we organized and ran 7 workshops in Denmark, Turkey, Lithuania, Poland, Bulgaria, the Slovak Republic and Hungary. Participants came from the public health sector (141), non-public health decision makers (113) and public health students (100). A concise curriculum was developed and tested during these workshops. Participants developed a basic understanding of HIA and skills to develop and use their own screening tools, as well as scoping. Within the workshop in Denmark we tested an online, real-time Internet-based training method; participants highly welcomed this...
Ahn, Chul; Hu, Fan; Skinner, Celette Sugg; Ahn, Daniel
2009-07-01
In some cluster randomization trials, the number of clusters cannot exceed a specified maximum value due to cost constraints or other practical reasons. Donner and Klar [Donner A, and Klar N. Design and analysis of cluster randomization trials in health research. Oxford University Press 2000] provided the sample size formula for the number of subjects required per cluster when the number of clusters cannot exceed a specified maximum value. The sample size formula of Donner and Klar assumes that the number of subjects is the same in each cluster. In practical situations, the number of subjects may differ among clusters. We conducted simulation studies to investigate the effect of the cluster size variability (kappa) and the intracluster correlation coefficient (rho) on the power of a study in which the number of available clusters is fixed in advance. For the balanced case (kappa = 1.0), i.e., equal cluster size among clusters, the sample size formula yielded empirical powers close to the nominal level even when the number of available clusters per group (k*) is as small as 10. The sample size formula yielded empirical powers close to the nominal level when the number of available clusters per group (k*) is at least 20 and the imbalance parameter (kappa) is at least 0.8. Empirical powers were close to the nominal level when (rho ≤ …, kappa ≥ 0.8, and k* = 10) or (rho ≤ 0.02, kappa = 0.8, and k* = 20).
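A sketch of the design-effect algebra underlying a formula of this kind, assuming equal cluster sizes: if k clusters per arm are fixed and n_ind subjects per arm would suffice under individual randomization, the cluster size m must satisfy k·m/(1+(m-1)·rho) = n_ind. The helpers below are illustrative and are not the exact expressions from Donner and Klar or from this paper.

```python
import math
from scipy.stats import norm

def n_individual(delta, sd, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-sample comparison of means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

def subjects_per_cluster(n_ind, k, rho):
    """Solve k*m / (1 + (m-1)*rho) = n_ind for the cluster size m.
    No finite m can reach the target power unless k > n_ind * rho."""
    if k <= n_ind * rho:
        raise ValueError("too few clusters for any cluster size to work")
    return math.ceil(n_ind * (1 - rho) / (k - n_ind * rho))

n_ind = n_individual(delta=0.5, sd=1.0)             # about 63 per arm
print(subjects_per_cluster(n_ind, k=20, rho=0.02))  # required cluster size
```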
Collaborative mission planning for UAV cluster to optimize relay distance
Tanil, Cagatay; Warty, Chirag; Obiedat, Esam
Unmanned Aerial Vehicles (UAVs) coordinated path planning and intercommunication for visual exploration of a geographical region has recently become crucial. Multiple UAVs cover a larger area than a single UAV and eliminate blind spots. To improve the surveillance, survivability and quality of the communication, we propose two algorithms for the route planning of a UAV cluster operated in an obstacle-rich environment: (i) Multiple Population Genetic Algorithm (MPGA) and (ii) Relay Selection Criteria (RSC). The main objective of MPGA is to minimize the total mission time while maintaining an optimal distance for communication between the neighboring nodes. MPGA utilizes evolutionary speciation techniques with a novel Feasible Population Creation Method (FPCM) and an enhanced Inter-species Crossover Mechanism (ISCM) to obtain diversified routes in remarkably short time. In obtaining collision-free optimum paths, UAVs are subjected to constraints such as limited communication range, maximum maneuverability and fuel capacity. In addition to path planning, RSC is developed for the selection of UAV relay nodes based on the location of the relay relative to the source and destination. This is crucial since the Bit Error Rate (BER) performance of the link significantly depends on the location of the selected relay. In this paper, path planning and relay allocation algorithms are combined to provide seamless high quality monitoring of the region and superior Quality of Service (QoS) for audio-video applications. Also, simulations in different operation zones with a cluster of up to six UAVs are performed to verify the feasibility of the proposed algorithms in both optimality and computation time.
CLEAN: CLustering Enrichment ANalysis
Medvedovic Mario
2009-07-01
Abstract Background Integration of biological knowledge encoded in various lists of functionally related genes has become one of the most important aspects of analyzing genome-wide functional genomics data. In the context of cluster analysis, the functional coherence of clusters established through such analyses has been used to identify biologically meaningful clusters, compare clustering algorithms and identify biological pathways associated with the biological process under investigation. Results We developed a computational framework for analytically and visually integrating knowledge-based functional categories with the cluster analysis of genomics data. The framework is based on the simple, conceptually appealing, and biologically interpretable gene-specific functional coherence (CLEAN) score. The score is derived by correlating the clustering structure as a whole with functional categories of interest. We directly demonstrate that integrating biological knowledge in this way improves the reproducibility of conclusions derived from cluster analysis. The CLEAN score differentiates between the levels of functional coherence for genes within the same cluster based on their membership in enriched functional categories. We show that this aspect results in higher reproducibility across independent datasets and produces more informative genes for distinguishing different sample types than scores based on the traditional cluster-wide analysis. We also demonstrate the utility of the CLEAN framework in comparing clusterings produced by different algorithms. CLEAN was implemented as an add-on R package and can be downloaded at http://Clusteranalysis.org. The package integrates routines for calculating gene-specific functional coherence scores and the open source interactive Java-based viewer Functional TreeView (FTreeView). Conclusion Our results indicate that using the gene-specific functional coherence score improves the reproducibility of the...
Survey on Text Document Clustering
M.Thangamani; Dr.P.Thangaraj
2010-01-01
Document clustering is also referred to as text clustering, and its concept is essentially the same as data clustering. It is hard to find specific information in a large collection of documents, which is why document clustering came into the picture. Basically, a cluster means a group of similar data; document clustering means segregating the data into different groups of similar data. Clustering can be of a mathematical, statistical or numerical domain. Clustering is a fundamental data analysi...
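As a minimal illustration of the idea (not the survey's own method), documents can be embedded as tf-idf vectors and grouped with k-means; the toy corpus below is invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "the cluster head aggregates sensor data",
    "sensor networks route data to a sink",
    "galaxy clusters bend light from background galaxies",
    "gravitational lensing maps the mass of galaxy clusters",
]
# embed each document as a tf-idf vector, then group similar vectors
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # documents about the same topic fall in the same group
```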
Pirandola, Stefano; Giovannetti, Vittorio; Mancini, Stefano; Braunstein, Samuel L
2011-01-01
The readout of a classical memory can be modelled as a problem of quantum channel discrimination, where a decoder retrieves information by distinguishing the different quantum channels encoded in each cell of the memory [S. Pirandola, Phys. Rev. Lett. 106, 090504 (2011)]. In the case of optical memories, such as CDs and DVDs, this discrimination involves lossy bosonic channels and can be remarkably boosted by the use of nonclassical light (quantum reading). Here we generalize these concepts by extending the model of memory from single-cell to multi-cell encoding. In general, information is stored in a block of cells by using a channel-codeword, i.e., a sequence of channels chosen according to a classical code. Correspondingly, the readout of data is realized by a process of "parallel" channel discrimination, where the entire block of cells is probed simultaneously and decoded via an optimal collective measurement. In the limit of an infinite block we define the quantum reading capacity of the memory, quantify...
Wireless Connectivity and Capacity
Halldorsson, Magnus M
2011-01-01
Given $n$ wireless transceivers located in a plane, a fundamental problem in wireless communications is to construct a strongly connected digraph on them such that the constituent links can be scheduled in fewest possible time slots, assuming the SINR model of interference. In this paper, we provide an algorithm that connects an arbitrary point set in $O(\log n)$ slots, improving on the previous best bound of $O(\log^2 n)$ due to Moscibroda. This is complemented with a super-constant lower bound on our approach to connectivity. An important feature is that the algorithms allow for bi-directional (half-duplex) communication. One implication of this result is an improved bound of $\Omega(1/\log n)$ on the worst-case capacity of wireless networks, matching the best bound known for the extensively studied average-case. We explore the utility of oblivious power assignments, and show that essentially all such assignments result in a worst case bound of $\Omega(n)$ slots for connectivity. This rules out a recent cla...
Integrated flexible capacity and inventory management under flexible capacity uncertainty
Paç, Mehmet Fazıl
2006-01-01
In a manufacturing environment with volatile demand, inventory management can be coupled with dynamic capacity adjustments for handling the fluctuations more effectively. In this study we consider the integrated management of inventory and flexible capacity management under seasonal stochastic demand and uncertain labor supply. The capacity planning problem is investigated from the workforce planning perspective. We consider a manufactu...
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
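To illustrate the flavour of this approach, the concave entropy term -p·ln p can be replaced by the upper envelope of tangent lines, turning maximum-entropy estimation into a linear program. This sketch uses tangent cuts rather than the paper's bounded-variable construction and a robust revised simplex; the segment count and the toy moment constraint are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def maxent_lp(a, b, n_seg=30):
    """Maximum-entropy distribution p satisfying a @ p = b, via LP:
    each concave term h(p) = -p*ln(p) is bounded by n_seg tangent lines,
    and auxiliary variables t_i track the piecewise-linear envelope."""
    n = a.shape[1]
    pts = np.linspace(1e-4, 1.0, n_seg)            # tangent points
    h = -pts * np.log(pts)
    dh = -(np.log(pts) + 1.0)
    # variables z = [p_1..p_n, t_1..t_n]; maximize sum t == minimize -sum t
    c = np.concatenate([np.zeros(n), -np.ones(n)])
    rows, rhs = [], []
    for i in range(n):                             # t_i <= h0 + dh0*(p_i - p0)
        for p0, h0, d0 in zip(pts, h, dh):
            row = np.zeros(2 * n)
            row[n + i], row[i] = 1.0, -d0
            rows.append(row)
            rhs.append(h0 - d0 * p0)
    A_eq = np.zeros((1 + a.shape[0], 2 * n))
    A_eq[0, :n] = 1.0                              # probabilities sum to 1
    A_eq[1:, :n] = a
    b_eq = np.concatenate([[1.0], b])
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * n + [(None, None)] * n)
    return res.x[:n]

# classic test case: a die constrained to have mean 4.5
p = maxent_lp(np.arange(1, 7, dtype=float)[None, :], np.array([4.5]))
print(p.round(4))   # probabilities increase toward the higher faces
```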
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of ship’s draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
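For orientation, two of the commonly quoted estimates can be compared numerically. The quick Barrass rules (Cb·V²/50 confined, Cb·V²/100 open water) and the ICORELS expression below are the widely cited simplified forms, and the ship particulars are invented, so this is not the paper's own calculation.

```python
import math

def barrass_max_squat(cb, v_kn, confined=True):
    """Barrass' simplified maximum-squat estimate (metres): squat grows with
    the block coefficient and roughly the square of the speed in knots."""
    return cb * v_kn ** 2 / (50 if confined else 100)

def icorels_max_squat(disp_m3, lpp, v_ms, h, g=9.81):
    """ICORELS estimate: S = 2.4 * (disp/Lpp^2) * Frh^2 / sqrt(1 - Frh^2),
    with Frh = v / sqrt(g*h) the depth Froude number (valid for Frh < 1)."""
    frh = v_ms / math.sqrt(g * h)
    return 2.4 * (disp_m3 / lpp ** 2) * frh ** 2 / math.sqrt(1 - frh ** 2)

# hypothetical cargo ship: Cb=0.8, Lpp=140 m, displacement 20000 m3, depth 12 m
for v_kn in (6, 8, 10, 12):
    v_ms = v_kn * 0.5144
    print(v_kn, "kn:",
          round(barrass_max_squat(0.8, v_kn), 2), "m (Barrass, confined)",
          round(icorels_max_squat(20000, 140, v_ms, 12), 2), "m (ICORELS)")
```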
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
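A minimal sketch of the same Poisson maximum-likelihood criterion on a toy emission line, using direct numerical minimization rather than CORA's fixed-point iteration; the line profile, background level and simulated counts are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def poisson_nll(amp, counts, profile, background):
    """Negative Poisson log-likelihood of the observed counts given a model
    mu = background + amp * profile (terms constant in amp are dropped)."""
    mu = background + amp * profile
    return np.sum(mu - counts * np.log(mu))

# toy spectrum: Gaussian line on a flat background, deliberately low counts
rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 80)
profile = np.exp(-0.5 * (x / 0.8) ** 2)
counts = rng.poisson(0.5 + 3.0 * profile)        # true line amplitude = 3
fit = minimize_scalar(poisson_nll, bounds=(0, 50), method="bounded",
                      args=(counts, profile, 0.5))
print(fit.x)   # ML amplitude, with no Gaussian-noise assumption needed
```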
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional to absolute temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with the full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Agricultural Clusters in the Netherlands
Schouten, M.A.; Heijman, W.J.M.
2012-01-01
Michael Porter was the first to use the term cluster in an economic context. He introduced the term in The Competitive Advantage of Nations (1990). The term cluster is also known as business cluster, industry cluster, competitive cluster or Porterian cluster. This article aims at determining and measuring...
Determination of Maximum Follow-up Speed of Electrode System of Resistance Projection Welders
Wu, Pei; Zhang, Wenqi; Bay, Niels
2004-01-01
The maximum follow-up speed of the electrode system represents the dynamic mechanical response capacity of resistance projection welding machines, which is important for telling one machine from another and for considering the individual behavior of machines when designing or optimizing the weld process settings for stable production and high product quality. In this paper, the maximum follow-up speed of the electrode system was tested using a specially designed device which can be mounted on all types of machines and easily applied in industry; the corresponding mathematical...
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T⁻¹(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Available transmission capacity assessment
Škokljev Ivan
2012-01-01
Effective power system operation requires the analysis of vast amounts of information. Power market activities expose power transmission networks to high-level power transactions that threaten normal, secure operation of the power system. When there are service requests for a specific sink/source pair in a transmission system, the transmission system operator (TSO) must allocate the available transfer capacity (ATC). It is common for ATC to have a single numerical value. Additionally, the ATC must be calculated for the base case configuration of the system, while generation dispatch and topology remain unchanged during the calculation. Posting ATC on the internet should benefit prospective users by aiding them in formulating their requests. However, a single numerical value of ATC offers little prospect for analysis, planning, what-if combinations, etc. A symbolic approach to the power flow problem (DC power flow) and ATC offers a numerical computation at the very end, whilst the calculation beforehand is performed using symbols for the general topology of the electrical network. Qualitative analysis of the ATC using only qualitative values, such as increase, decrease or no change, offers some new insights into ATC evaluation, multiple transactions evaluation, the value of counter-flows and their impact, etc. Symbolic analysis in this paper is performed after the execution of the linear, symbolic DC power flow. As control variables, the mathematical model comprises linear security constraints, ATC, PTDFs and transactions. The aim is to perform an ATC sensitivity study on a five node/seven line transmission network, used for zonal market activity tests. A relatively complicated environment with twenty possible bilateral transactions is observed.
Peak capacity in unidimensional chromatography.
Neue, Uwe Dieter
2008-03-14
The currently existing knowledge about peak capacity in unidimensional separations is reviewed. The majority of the paper is dedicated to reversed-phase gradient chromatography, covering specific techniques as well as the subject of peak compression. Other sections deal with peak capacity in isocratic chromatography, size-exclusion chromatography and ion-exchange chromatography. An important topic is the limitation of the separation power and the meaning of the concept of peak capacity for real applications.
Theory of Electrorotation of Clustered Colloidal Particles
LIU Ren-Ming; HUANG Ji-Ping
2005-01-01
When a colloidal suspension is exposed to a strong rotating electric field, an aggregation of the suspended particles is induced to appear. In such clusters, the separation between the suspended particles is so close that one cannot neglect the multiple image effect on the electrorotation (ER) spectrum. Since so far the exact multiple image method exists in two dimensions only, rather than in three dimensions, we investigate the ER spectrum of the clustered colloidal particles in two dimensions, in which many cylindrical particles are randomly distributed in a sheet cluster. We report the dependence of the ER spectrum on the material parameters. It is shown that the multiple image method predicts two characteristic frequencies, at which the rotation speed reaches maximum. To this end, the multiple image method is numerically demonstrated to be in good agreement with the known Maxwell-Garnett approximation.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
On Cellular MIMO Channel Capacity
Adachi, Koichi; Adachi, Fumiyuki; Nakagawa, Masao
To increase the transmission rate without bandwidth expansion, the multiple-input multiple-output (MIMO) technique has recently been attracting much attention. The MIMO channel capacity in a cellular system is affected by the interference from neighboring co-channel cells. In this paper, we introduce the cellular channel capacity and evaluate its outage capacity, taking into account the frequency-reuse factor, path loss exponent, standard deviation of shadowing loss, and transmission power of a base station (BS). Furthermore, we compare the cellular MIMO downlink channel capacity with those of other multi-antenna transmission techniques such as single-input multiple-output (SIMO) and space-time block coded multiple-input single-output (STBC-MISO). We show that the optimum frequency-reuse factor F that maximizes the 10%-outage capacity is 3, while the factor that maximizes both the 50%- and 90%-outage capacities is 1, irrespective of the type of multi-antenna transmission technique, where the q%-outage capacity is defined as the channel capacity that gives an outage probability of q%. We also show that the cellular MIMO channel capacity is always higher than those of SIMO and STBC-MISO.
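For intuition about outage capacity as defined above, a Monte Carlo sketch for a single-cell i.i.d. Rayleigh MIMO link; the cellular interference, path loss and shadowing modeled in the paper are deliberately left out, so these numbers are only a baseline.

```python
import numpy as np

def outage_capacity(nt, nr, snr_db, q=0.10, trials=20000, seed=0):
    """q-outage capacity of an i.i.d. Rayleigh MIMO link: the rate C with
    Prob[log2 det(I + (SNR/nt) H H^H) < C] = q, estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    caps = np.empty(trials)
    for t in range(trials):
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(nr) + (snr / nt) * H @ H.conj().T)
        caps[t] = logdet / np.log(2)
    return np.quantile(caps, q)

print(outage_capacity(4, 4, snr_db=10))  # bits/s/Hz exceeded 90% of the time
```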
Capacity Building in Land Management
Enemark, Stig; Ahene, Rexford
2003-01-01
There is a significant need for capacity building in the interdisciplinary area of land management, especially in developing countries and countries in transition, to deal with the complex issues of building efficient land information systems and sustainable institutional infrastructures. Capacity ... development in this area. Furthermore, capacity building should ensure that the focus is on building sound institutions and governance rather than just high-level IT-infrastructures. This overall approach to capacity building in land management is used for implementing a new land policy reform in Malawi...
Low-SNR Capacity of MIMO Optical Intensity Channels
Chaaban, Anas
2017-09-18
The capacity of the multiple-input multiple-output (MIMO) optical intensity channel is studied, under both average and peak intensity constraints. We focus on low SNR, which can be modeled as the scenario where both constraints proportionally vanish, or where the peak constraint is held constant while the average constraint vanishes. A capacity upper bound is derived, and is shown to be tight at low SNR under both scenarios. The capacity achieving input distribution at low SNR is shown to be a maximally-correlated vector-binary input distribution. Consequently, the low-SNR capacity of the channel is characterized. As a byproduct, it is shown that for a channel with peak intensity constraints only, or with peak intensity constraints and individual (per aperture) average intensity constraints, a simple scheme composed of coded on-off keying, spatial repetition, and maximum-ratio combining is optimal at low SNR.
Fast Forward Maximum entropy reconstruction of sparsely sampled data.
Balsgart, Nicholas M; Vosegaard, Thomas
2012-10-01
We present an analytical algorithm using fast Fourier transformations (FTs) for deriving the gradient needed as part of the iterative reconstruction of sparsely sampled datasets using the forward maximum entropy reconstruction (FM) procedure by Hyberts and Wagner [J. Am. Chem. Soc. 129 (2007) 5108]. The major drawback of the original algorithm is that it required one FT and one evaluation of the entropy per missing datapoint to establish the gradient. In the present study, we demonstrate that the entire gradient may be obtained using only two FTs and one evaluation of the entropy derivative, thus achieving impressive time savings compared to the original procedure. An example: A 2D dataset with sparse sampling of the indirect dimension, with sampling of only 75 out of 512 complex points (15% sampling), would lack (512-75)×2=874 points per ν2 slice. The original FM algorithm would require 874 FTs and entropy function evaluations to set up the gradient, while the present algorithm is ∼450 times faster in this case, since it requires only two FTs. This allows reduction of the computational time from several hours to less than a minute. Even more impressive time savings may be achieved with 2D reconstructions of 3D datasets, where the original algorithm required days of CPU time on high-performance computing clusters but the new algorithm requires only a few minutes of calculation on regular laptop computers.
Cluster infall in the concordance LCDM model
Pivato, M C; Lambas, D G; Pivato, Maximiliano C.; Padilla, Nelson D.; Lambas, Diego G.
2005-01-01
We perform statistical analyses of the infall of dark matter onto clusters in numerical simulations within the concordance LCDM model. By studying the infall profile around clusters of different mass, we find a linear relation between the maximum infall velocity and mass, which reaches 900 km/s for the most massive groups. The maximum infall velocity and the group mass follow a suitable power law fit of the form V_{inf}^{max} = (M/m_0)^{gamma}. By comparing the measured infall velocity to the linear infall model with an exponential cutoff introduced by Croft et al., we find that the best agreement is obtained for a critical overdensity delta_c = 45. We study the dependence of the direction of infall with respect to the cluster centres, and find that in the case of massive groups, the maximum alignment occurs at scales r ~ 6 Mpc/h. We obtain a logarithmic power-law relation between the average infall angle and the group mass. We also study the dependence of the results on the local dark-matter density, finding a r...
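A power law of the form V = (M/m_0)^gamma can be fitted by ordinary least squares in log-log space; a small sketch with invented mass-velocity pairs, not the simulation's catalogue.

```python
import numpy as np

def fit_infall_power_law(mass, v_max):
    """Fit V = (M/m0)**gamma, i.e. log V = gamma*(log M - log m0),
    so the slope of a log-log fit is gamma and the intercept fixes m0."""
    g, c = np.polyfit(np.log(mass), np.log(v_max), 1)
    return g, np.exp(-c / g)          # gamma, m0

# illustrative numbers only (solar masses, km/s)
mass = np.array([1e13, 3e13, 1e14, 3e14, 1e15])
v = np.array([260, 370, 520, 700, 900])
gamma, m0 = fit_infall_power_law(mass, v)
print(gamma, m0)
```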
Capacity Measurement with the UIC 406 Capacity Method
Landex, Alex; Schittenhelm, Bernd; Kaas, Anders H.
2008-01-01
This article describes the fast and effective UIC 406 method for calculating capacity consumption on railway lines. It is possible to expound the UIC 406 method in different ways which can lead to different capacity consumptions. Therefore, this article describes how the methodology is expounded...
Lund-Thomsen, Peter; Pillay, Renginee G.
2012-01-01
Purpose – The paper seeks to review the literature on CSR in industrial clusters in developing countries, identifying the main strengths, weaknesses, and gaps in this literature, pointing to future research directions and policy implications in the area of CSR and industrial cluster development ... in this field, and their comments were incorporated in the final version submitted to Corporate Governance. Findings – The article traces the origins of the debate on industrial clusters and CSR in developing countries back to the early 1990s, when clusters began to be seen as an important vehicle for local economic development in the South. At the turn of the millennium the industrial cluster debate expanded as clusters were perceived as a potential source of poverty reduction, while their role in promoting CSR among small and medium-sized enterprises began to take shape from 2006 onwards. At present, there is still...
Cosmology with cluster surveys
Subhabrata Majumdar
2004-10-01
Surveys of clusters of galaxies provide us with a powerful probe of the density and nature of the dark energy. The red-shift distribution of detected clusters is highly sensitive to the dark energy equation of state parameter w. Upcoming Sunyaev–Zel'dovich (SZ) surveys would provide us large yields of clusters to very high red-shifts. Self-calibration of cluster scaling relations, possible for such a huge sample, would be able to constrain systematic biases on mass estimators. Combining cluster red-shift abundance with limited mass follow-up and the cluster mass power spectrum can then give constraints on w, as well as on σ8 and Ω_M, to a few per cent.
Disentangling Porterian Clusters
Jagtfelt, Tue
This dissertation investigates the contemporary phenomenon of industrial clusters based on the work of Michael E. Porter, the central progenitor and promoter of the cluster notion. The dissertation pursues two central questions: 1) What is a cluster? and 2) How could Porter’s seemingly fuzzy, contested theory become so widely disseminated and applied as a normative and prescriptive strategy for economic development? The dissertation traces the introduction of the cluster notion into the EU’s Lisbon Strategy and demonstrates how its inclusion originates from Porter’s colleagues: Professor Örjan Sölvell, Dr. Christian Ketels and Dr. Göran Lindqvist. Taking departure in Porter’s works and the cluster literature, the dissertation shows that a considerable paradigmatic shift has occurred from the first edition of Nations to the present state of cluster cooperation. To elaborate on this change...
Mathieu, Claire; Schudy, Warren
2010-01-01
We study the online clustering problem where data items arrive in an online fashion. The algorithm maintains a clustering of data items into similarity classes. Upon arrival of v, the relation between v and previously arrived items is revealed, so that for each u we are told whether v is similar to u. The algorithm can create a new cluster for v and merge existing clusters. When the objective is to minimize disagreements between the clustering and the input, we prove that a natural greedy algorithm is O(n)-competitive, and this is optimal. When the objective is to maximize agreements between the clustering and the input, we prove that the greedy algorithm is .5-competitive; that no online algorithm can be better than .834-competitive; we prove that it is possible to get better than 1/2, by exhibiting a randomized algorithm with competitive ratio .5+c for a small positive fixed constant c.
Cluster Management Institutionalization
Normann, Leo; Agger Nielsen, Jeppe
2015-01-01
This article explores a new management form – cluster management – in Danish public sector day care. Although cluster management has been widely adopted in Danish day care at the municipality level, it has attracted only sparse research attention. We use theoretical insights from Scandinavian institutionalism together with a longitudinal case-based inquiry into how cluster management has entered and penetrated the management practices of day care in Denmark. We demonstrate how cluster management became widely adopted in the day care field not only because of its intrinsic properties but also because of how it was legitimized as a “ready-to-use” management model. Further, our account reveals how cluster management translated into considerably different local variants as it travelled into specific organizations. However, these processes have not occurred sequentially with cluster management first...
Cluster Correspondence Analysis.
van de Velden, M; D'Enza, A Iodice; Palumbo, F
2017-03-01
A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between-variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided, and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.
Clustering Categorical Data:A Cluster Ensemble Approach
He Zengyou(何增友); Xu Xiaofei; Deng Shengchun
2003-01-01
Clustering categorical data, an integral part of data mining, has attracted much attention recently. In this paper, the authors formally define the categorical data clustering problem as an optimization problem from the viewpoint of cluster ensembles, and apply the cluster ensemble approach to clustering categorical data. Experimental results on real datasets show that better clustering accuracy can be obtained compared with existing categorical data clustering algorithms.
Spatial Scan Statistic: Selecting clusters and generating elliptic clusters
Christiansen, Lasse Engbo; Andersen, Jens Strodl
2004-01-01
The spatial scan statistic is widely used to search for clusters. This paper shows that the usually applied elimination of overlapping clusters to find secondary clusters is sensitive to smooth changes in the shape of the clusters. We present an algorithm for generating sets of confocal elliptic clusters. In addition, we propose a new way to present the information in a given set of clusters based on the significance of the clusters.
Fuzzy C-Means Clustering and Energy Efficient Cluster Head Selection for Cooperative Sensor Network
Bhatti, Dost Muhammad Saqib; Saeed, Nasir; Nam, Haewoon
2016-01-01
We propose a novel cluster based cooperative spectrum sensing algorithm to save the wastage of energy, in which clusters are formed using fuzzy c-means (FCM) clustering and a cluster head (CH) is selected based on a sensor’s location within each cluster, its location with respect to the fusion center (FC), its signal-to-noise ratio (SNR) and its residual energy. The sensing information of a single sensor is not reliable enough due to shadowing and fading. To overcome these issues, cooperative spectrum sensing schemes were proposed to take advantage of spatial diversity. For cooperative spectrum sensing, all sensors sense the spectrum and report the sensed energy to the FC for the final decision. However, this increases the energy consumption of the network when a large number of sensors need to cooperate; in addition, the efficiency of the network is also reduced. The proposed algorithm forms the clusters and selects the CHs such that a very small amount of network energy is consumed and the highest efficiency of the network is achieved. Using the proposed algorithm, the maximum probability of detection under an imperfect channel is accomplished with minimum energy consumption as compared to conventional clustering schemes. PMID:27618061
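A compact numpy sketch of the plain FCM iteration used for cluster formation; the fuzzifier m, iteration limits and toy node coordinates are assumptions, and the CH selection criteria from the abstract are not modeled here.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means: alternate the membership update
    u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1)) with the
    membership-weighted centroid update."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U_new = d ** (-2 / (m - 1))
        U_new /= U_new.sum(axis=0)
        done = np.abs(U_new - U).max() < tol
        U = U_new
        if done:
            break
    return centers, U

# toy sensor coordinates; border nodes split their membership weight
X = np.vstack([np.random.default_rng(1).normal(0, 1, (20, 2)),
               np.random.default_rng(2).normal(6, 1, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers.round(2))
```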
Cristiani, S; D'Odorico, V; Fontana, A; Giallongo, E; Moscardini, L; Savaglio, S
1997-01-01
The observed clustering of Lyman-$\alpha$ lines is reviewed and compared with the clustering of CIV systems. We argue that a continuity of properties exists between Lyman-$\alpha$ and metal systems and show that the small-scale clustering of the absorbers is consistent with a scenario of gravitationally induced correlations. At large scales statistically significant over- and under-densities (including voids) are found on scales of tens of Mpc.
Clustering Techniques in Bioinformatics
Muhammad Ali Masood
2015-01-01
Dealing with data means grouping information into a set of categories, either in order to learn new artifacts or to understand new domains. For this purpose researchers have always looked for the hidden patterns in data that can be defined and compared with other known notions, based on the similarity or dissimilarity of their attributes according to well-defined rules. Data mining, with its tools of data classification and data clustering, is one of the most powerful techniques for dealing with data in such a manner that it can help researchers identify the required information. As a step forward to address this challenge, experts have utilized clustering techniques as a means of exploring hidden structure and patterns in underlying data. With improved stability, robustness and accuracy of unsupervised data classification in many fields including pattern recognition, machine learning, information retrieval, image analysis and bioinformatics, clustering has proven itself a reliable tool. To identify the clusters in a dataset, algorithms are utilized to partition the data set into several groups based on the similarity within a group. There is no single clustering algorithm; various algorithms are utilized based on the domain of the data that constitutes a cluster and the level of efficiency required. Clustering techniques are categorized based upon different approaches. This paper is a survey of a few of the many clustering techniques used in data mining; five of the most common are discussed: K-medoids, K-means, Fuzzy C-means, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Self-Organizing Map (SOM) clustering.
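Two of the surveyed techniques contrasted on a shape that separates them well; a small scikit-learn sketch in which all parameters (eps, min_samples, the moons dataset) are chosen for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_moons

# two interleaving half-moons: a non-hyperspherical cluster structure
X, _ = make_moons(n_samples=300, noise=0.06, random_state=0)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
db = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)   # -1 marks noise points

# centroid-based k-means tends to cut across the moons,
# while density-based DBSCAN follows each moon's shape
print(np.unique(km, return_counts=True))
print(np.unique(db, return_counts=True))
```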
Escalera, E; Girardi, M; Giuricin, G; Mardirossian, F; Mazure, A; Mezzetti, M
1993-01-01
The analysis of the presence of substructures in 16 well-sampled clusters of galaxies suggests a stimulating hypothesis: clusters could be classified as unimodal or bimodal, on the basis of the sub-clump distribution in the 3-D space of positions and velocities. The dynamic study of these clusters shows that their fundamental characteristics, in particular the virial masses, are not severely biased by the presence of subclustering if the system considered is bound.
Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model
Yang, Yuefang; Gan, Chunhui; Shen, Tingting
2017-05-01
In this study of the configuration of self-equipped tankers in a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park loading and unloading area and the transportation demand for the dangerous goods are taken as the constraint conditions of the model; then the transport arc capacity, the transport arc flow and the transport arc edge weight are determined in the transportation network diagram; finally, the model is solved by software. The calculation results show that the configuration issue of the tankers can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical application value for tanker management in railway transportation of dangerous goods in the chemical logistics park.
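A minimal sketch of a minimum cost maximum flow computation with networkx; the network below (tracks, capacities, shunting costs) is invented for illustration and is not the paper's diagram.

```python
import networkx as nx

# hypothetical network: source -> loading tracks -> unloading area -> sink;
# capacity = tankers a track can handle per day, weight = shunting cost
G = nx.DiGraph()
G.add_edge("s", "track1", capacity=8, weight=0)
G.add_edge("s", "track2", capacity=6, weight=0)
G.add_edge("track1", "unload", capacity=5, weight=2)
G.add_edge("track1", "store", capacity=4, weight=1)
G.add_edge("track2", "unload", capacity=6, weight=3)
G.add_edge("store", "unload", capacity=4, weight=1)
G.add_edge("unload", "t", capacity=12, weight=0)

flow = nx.max_flow_min_cost(G, "s", "t")
print(nx.cost_of_flow(G, flow))   # cheapest routing of the maximum tanker flow
print(flow)
```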
Fault Detection and Recovery in Wireless Sensor Network Using Clustering
Abolfazl Akbari
2011-02-01
Many wireless sensor networks consist of numerous immobile nodes with limited energy and no possibility of recharging. As the network operates, energy-drained nodes inevitably fail, lose communication and can split the network. To avoid such partitioning, fault recovery of failed nodes and self-healing are necessary. In this work, we design techniques to maintain the cluster structure in the event of failures caused by energy-drained nodes. Initially, the node with the maximum residual energy in a cluster becomes the cluster head and the node with the second-highest residual energy becomes the secondary cluster head; thereafter, the selection of the cluster head and secondary cluster head is based on the available residual energy. We use Matlab as the simulation platform; quantities such as energy consumption per cluster and the number of clusters are computed in the evaluation of the proposed algorithm. Finally, we evaluate and compare the proposed method against previous methods, such as Venkataraman's, and demonstrate that our model achieves a better energy consumption rate.
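The head-election rule described above reduces to a simple ranking by residual energy; a minimal Python sketch under that reading (node names and energy values are hypothetical):

```python
def elect_heads(residual_energy):
    """residual_energy: dict mapping node id -> remaining energy (J)."""
    # rank nodes by remaining energy, highest first
    ranked = sorted(residual_energy, key=residual_energy.get, reverse=True)
    head, secondary = ranked[0], ranked[1]   # head and backup head
    return head, secondary

cluster = {"n1": 4.2, "n2": 5.1, "n3": 3.7, "n4": 5.0}
head, backup = elect_heads(cluster)
print(head, backup)  # n2 n4
```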
SAFETY-BASED CAPACITY ANALYSIS FOR CHINESE HIGHWAYS
Ping YI, Ph.D.
2004-01-01
Many years of research have led to the development of theories and methodologies for roadway capacity analysis in developed countries. However, those resources coexist with local roadway design and traffic control practices, and cannot simply be transferred to China for application. For example, the Highway Capacity Manual in the United States describes roadway capacity under ideal conditions and estimates practical capacities under prevailing field conditions. This capacity, and the conditions for change, are expected to differ on Chinese roadways, as the local roadway design (lane width, curves and grades), vehicle size, and traffic mix are different. This research takes an approach to the capacity issue different from that of the Highway Capacity Manual. Starting from the car-following principle, this paper first describes the safety criteria that affect traffic operations. Several speed schemes are then discussed as they are affected by the maximum speed achievable under local conditions. The study shows that the effect of geometric and traffic conditions can be effectively reflected in the maximum speed adopted by drivers. For most Chinese highways without a posted speed limit, the drivers' choice of speed from the safety perspective is believed to incorporate the practical driving conditions. On this basis, a condition for capacity calculation is obtained by comparing the desired versus the safety-based distance headways. The formulation of the model is mathematically sound and physically meaningful, and preliminary testing of the model is encouraging. Future research includes field data acquisition for calibration and adjustment, and model testing on Chinese highways.
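To make the safety-based idea concrete, the following sketch computes a lane capacity from a stopping-distance headway at a given maximum speed. This is a generic textbook car-following approximation chosen for illustration, not the paper's exact model, and all parameter values are assumptions.

```python
def safety_based_capacity(v_max_kmh, reaction_s=1.0, decel_ms2=3.0, length_m=6.0):
    """Capacity (veh/h/lane) if every vehicle keeps the safe headway."""
    v = v_max_kmh / 3.6                                           # m/s
    # safe spacing: reaction distance + braking distance + vehicle length
    headway = v * reaction_s + v**2 / (2 * decel_ms2) + length_m  # m/veh
    return 3600.0 * v / headway                                   # veh/h/lane

# maximum speeds shaped by local geometry and traffic mix (assumed values)
for v_max in (40, 60, 80):
    print(v_max, "km/h ->", round(safety_based_capacity(v_max)), "veh/h/lane")
```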
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
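Although the package itself is in R, the underlying decision model is compact enough to sketch in Python. The sketch below simulates difference-scaling trials and recovers the scale by maximum likelihood, assuming Gaussian decision noise and anchoring the scale end points at 0 and 1; the simulated scale, trial count and noise level are our own illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
true_psi = np.linspace(0, 1, 6) ** 2            # hypothetical perceptual scale
trials = rng.choice(6, size=(500, 4))           # stimulus indices (a, b, c, d)
# observer prefers (c,d) when (psi_d - psi_c) - (psi_b - psi_a) + noise > 0
delta = (true_psi[trials[:, 3]] - true_psi[trials[:, 2]]
         - true_psi[trials[:, 1]] + true_psi[trials[:, 0]])
resp = (delta + rng.normal(0, 0.1, 500)) > 0

def neg_log_lik(params):
    psi = np.concatenate(([0.0], params[:-1], [1.0]))   # anchored scale values
    sigma = abs(params[-1]) + 1e-6                      # decision noise
    d = (psi[trials[:, 3]] - psi[trials[:, 2]]
         - psi[trials[:, 1]] + psi[trials[:, 0]])
    p = norm.cdf(d / sigma).clip(1e-9, 1 - 1e-9)
    return -np.sum(np.where(resp, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, x0=np.r_[np.linspace(0.2, 0.8, 4), 0.2])
print("estimated scale:", np.round(np.r_[0, fit.x[:-1], 1], 3))
```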
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
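The linear-time solution alluded to is usually known as Kadane's algorithm; a minimal Python rendering (the paper itself works in a monadic functional setting):

```python
def max_segment_sum(xs):
    """Largest sum over contiguous segments (the empty segment counts as 0)."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)  # extend or restart the segment
        best = max(best, ending_here)
    return best

print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
```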
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion $(B_t)_{t\ge 0}$ and the equation of motion $dX_t = v_t\,dt + \sqrt{2}\,dB_t$, we set $S_t = \max_{0\le s\le t} X_s$ and consider the optimal control problem $\sup_v E(S_\tau - c\tau)$, where $c>0$ and the supremum is taken over all admissible controls $v$ satisfying $v_t \in [\mu_0, \mu_1]$ for all $t$ up to $\tau = \inf\{t>0 \mid X_t \notin (\ell_0, \ell_1)\}$, with $\mu_0 < 0 < \mu_1$. The optimal control switches between the extreme values $\mu_0$ and $\mu_1$ according to whether $X_t$ lies below or above $g_*(S_t)$, where $s \mapsto g_*(s)$ is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
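A rough numeric sketch of the quantity being bounded: the spectral luminous efficacy of a blackbody spectrum truncated to a visible bandpass, K = 683 lm/W x int(V S) / int(S). The Gaussian stand-in for the photopic curve V(lambda) and the chosen bandpass are crude assumptions, so the output is indicative only (the true CIE curve is tabulated).

```python
import numpy as np

lam = np.linspace(400e-9, 700e-9, 2000)                # assumed bandpass (m)
h, c, kB, T = 6.626e-34, 2.998e8, 1.381e-23, 5800.0    # SI constants, temp (K)
S = lam**-5 / (np.exp(h * c / (lam * kB * T)) - 1.0)   # Planck spectrum (arb.)
V = np.exp(-0.5 * ((lam - 555e-9) / 40e-9) ** 2)       # crude photopic V(lambda)
K = 683.0 * (V * S).sum() / S.sum()                    # uniform grid: sums suffice
print(f"spectral luminous efficacy ~ {K:.0f} lm/W")
```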
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates xμ and the energy-momentum pμ in quantum theory to construct a momentum space quantum gravity geometry with a metric sμν and a curvature tensor Pλ μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest in a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, although objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects in surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers using features collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used to discriminate target objects in surveillance video. Our experimental results are presented in terms of success rate and segmentation precision.
Evaluation model for safety capacity of chemical industrial park based on acceptable regional risk
Guohua Chen; Shukun Wang; Xiaoqun Tan
2015-01-01
The paper defines the Safety Capacity of a Chemical Industrial Park (SCCIP) from the perspective of acceptable regional risk. To explore an evaluation model for the SCCIP, a method based on quantitative risk assessment is adopted to evaluate transport risk and to establish a reasonable safe transport capacity for the park; combined with the safe storage capacity, an SCCIP evaluation model is then put forward. The SCCIP is the smaller of the maximum safe storage capacity and the maximum safe transport capacity; otherwise, the regional risk of the park would exceed the acceptable level. The developed method was applied to a chemical industrial park in Guangdong province to obtain the maximum safe transport capacity and the SCCIP. The results can be used effectively in the regional risk control of the park.
The Youngest Globular Clusters
Beck, Sara
2014-01-01
It is likely that all stars are born in clusters, but most clusters are not bound and disperse. None of the many protoclusters in our Galaxy are likely to develop into long-lived bound clusters. The Super Star Clusters (SSCs) seen in starburst galaxies are more massive and compact and have better chances of survival. The birth and early development of SSCs takes place deep in molecular clouds, and during this crucial stage the embedded clusters are invisible to optical or UV observations but are studied via the radio-infrared supernebulae (RISN) they excite. We review observations of embedded clusters and identify RISN within 10 Mpc whose exciting clusters have a million solar masses or more in volumes of a few cubic parsecs and which are likely to not only survive as bound clusters, but to evolve into objects as massive and compact as Galactic globulars. These clusters are distinguished by a very high star formation efficiency $\eta$, at least a factor of 10 higher than the few percent seen in the Galaxy, probably...
Perez, Adrianna; Moreno, Jorge; Naiman, Jill; Ramirez-Ruiz, Enrico; Hopkins, Philip F.
2017-01-01
In this work, we analyze the environments surrounding star clusters in simulated merging galaxies. Our framework employs the Feedback In Realistic Environments (FIRE) model (Hopkins et al., 2014). FIRE is a high-resolution cosmological simulation that resolves star-forming regions and incorporates stellar feedback in a physically realistic way. Our project focuses on analyzing the properties of the star clusters formed in merging galaxies. The locations of these star clusters are identified with astrodendro.py, a publicly available dendrogram algorithm. Once star cluster properties are extracted, they will be used to create a sub-grid (smaller than the resolution scale of FIRE) model of gas confinement in these clusters. We can then examine how the star clusters interact with the available gas reservoirs (either by accreting this mass or blowing it out via feedback), which will determine many properties of the cluster (star formation history, compact object accretion, etc.). These simulations will further our understanding of star formation within stellar clusters during galaxy evolution. In the future, we aim to enhance sub-grid prescriptions for feedback specific to processes within star clusters, such as interaction with stellar winds and gas accretion onto black holes and neutron stars.
Forman, W; Markevitch, M L; Vikhlinin, A A; Churazov, E
2002-01-01
We discuss Chandra results related to 1) cluster mergers and cold fronts and 2) interactions between relativistic plasma and hot cluster atmospheres. We describe the properties of cold fronts using NGC1404 in the Fornax cluster and A3667 as examples. We discuss multiple surface brightness discontinuities in the cooling flow cluster ZW3146. We review the supersonic merger underway in CL0657. Finally, we summarize the interaction between plasma bubbles produced by AGN and hot gas using M87 and NGC507 as examples.
The Cluster Active Archive: Studying the Earth's Space Plasma Environment
Laakso, Harri; Escoubet, C. Philippe
2010-01-01
Since the year 2000 the ESA Cluster mission has been investigating the small-scale structures and processes of the Earth's plasma environment, such as those involved in the interaction between the solar wind and the magnetospheric plasma, in global magnetotail dynamics, in cross-tail currents, and in the formation and dynamics of the neutral line and of plasmoids. This book contains presentations made at the 15th Cluster workshop held in March 2008. It also presents several articles about the Cluster Active Archive and its datasets, a few overview papers on the Cluster mission, and articles reporting on scientific findings on the solar wind, the magnetosheath, the magnetopause and the magnetotail.
Reach capacity in older women submitted to flexibility training
Elciana de Paiva Lima Vieira
2015-11-01
The aim of this study was to analyze the effect of flexibility training on the maximum range of motion and reach capacity of older women practicing aquatic exercises in the Prev-Quedas project. Participants were divided into two groups: intervention (IG, n = 25), submitted to the flexibility training program, and control (CG, n = 21), in which the older women participated only in aquatic exercises. Flexibility training lasted three months at a weekly frequency of two days and consisted of stretching exercises involving the trunk and lower limbs performed after the aquatic exercises; the stretching method used was passive static. Assessment consisted of functional reach, lateral reach and goniometric tests. Statistical analysis was performed using the Shapiro-Wilk normality test, ANCOVA, and Pearson and Spearman correlations. The IG showed significant gains in maximum range of motion for the right hip joint (p = 0.0025), but the same result was not observed in the other joints assessed, and there was no improvement in functional or lateral reach capacity in either group. Significant correlations between reach capacity and range of motion in the trunk, hip and ankle were not observed. Therefore, flexibility training associated with the practice of aquatic exercises promoted increased maximum range of motion only for the hip joint, without improvement in reach capacity; the practice of aquatic exercises alone did not show significant results.
Cluster Analysis in Patients with GOLD 1 Chronic Obstructive Pulmonary Disease.
Philippe Gagnon
We hypothesized that heterogeneity exists within the Global Initiative for Chronic Obstructive Lung Disease (GOLD) 1 spirometric category and that different subgroups could be identified within this GOLD category. Pre-randomization study participants from two clinical trials were symptomatic/asymptomatic GOLD 1 chronic obstructive pulmonary disease (COPD) patients and healthy controls. A hierarchical cluster analysis used pre-randomization demographics, symptom scores, lung function, peak exercise response and daily physical activity levels to derive population subgroups. Considerable heterogeneity existed for clinical variables among patients with GOLD 1 COPD. All parameters, except forced expiratory volume in 1 second (FEV1)/forced vital capacity (FVC), had considerable overlap between GOLD 1 COPD and controls. Three clusters were identified: cluster I (18 [15%] COPD patients; 105 [85%] controls), cluster II (45 [80%] COPD patients; 11 [20%] controls), and cluster III (22 [92%] COPD patients; 2 [8%] controls). Apart from reduced diffusion capacity and a lower baseline dyspnea index versus controls, cluster I COPD patients had otherwise preserved lung volumes, exercise capacity and physical activity levels. Cluster II COPD patients had a greater smoking history and greater hyperinflation versus cluster I COPD patients. Cluster III COPD patients had reduced physical activity versus controls and clusters I and II COPD patients, and lower FEV1/FVC versus clusters I and II COPD patients. The results emphasize heterogeneity within GOLD 1 COPD, supporting an individualized therapeutic approach to patients. www.clinicaltrials.gov: NCT01360788 and NCT01072396.
The little-studied cluster Berkeley 90. III. Cluster parameters
Marco, Amparo
2016-01-01
The open cluster Berkeley 90 is home to one of the most massive binary systems in the Galaxy, LS III +46$^{\circ}$11, formed by two identical, very massive stars (O3.5 If* + O3.5 If*), and a second early-O system (LS III +46$^{\circ}$12, with an O4.5 IV((f)) component at least). Stars with spectral types earlier than O4 are very scarce in the Milky Way, with no more than 20 examples known. The formation of such massive stars is still an open question today, and thus the study of the environments where the most massive stars are found can shed some light on this topic. To this aim, we determine the properties and characterize the population of Berkeley 90 using optical, near-infrared and WISE photometry and optical spectroscopy. This is the first accurate determination of these parameters. We find a distance of $3.5^{+0.5}_{-0.5}$ kpc and a maximum age of 3 Ma. The cluster mass is around $1000$ $M_{\odot}$ (perhaps reaching $1500$ $M_{\odot}$ if the surrounding population is added), and we do not detect cand...
Validity of Selected Lab and Field Tests of Physical Working Capacity.
Burke, Edmund J.
The validity of selected lab and field tests of physical working capacity was investigated. Forty-four male college students were administered a series of lab and field tests of physical working capacity. Lab tests include a test of maximum oxygen uptake, the PWC 170 test, the Harvard Step Test, the Progressive Pulse Ratio Test, Margaria Test of…
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of plier grip spans on total grip force, individual finger forces and muscle activities in a maximum gripping task and in wire-cutting tasks. In the maximum gripping task, the 50-mm grip span yielded significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas muscle activities were higher at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: 30.3%, 31.3% and 41.3% for grip spans of 50 mm, 65 mm and 80 mm, respectively. Thus, a 50-mm grip span for pliers might be recommended to allow maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Statistical properties of convex clustering
Tan, Kean Ming; Witten, Daniela
2015-01-01
In this manuscript, we study the statistical properties of convex clustering. We establish that convex clustering is closely related to single linkage hierarchical clustering and $k$-means clustering. In addition, we derive the range of the tuning parameter for convex clustering that yields a non-trivial solution. We also provide an unbiased estimator of the degrees of freedom, and provide a finite sample bound for the prediction error for convex clustering. We compare convex clustering to so...
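For reference, the convex clustering problem studied here minimizes a least-squares fit plus a fused group-lasso penalty over the centroids. The sketch below attacks a toy instance with plain subgradient descent for illustration (real solvers use ADMM or AMA); the data and penalty weight are arbitrary.

```python
import numpy as np

# minimize  0.5 * sum_i ||x_i - u_i||^2  +  lam * sum_{i<j} ||u_i - u_j||_2
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])
U = X.copy()
lam, step = 0.1, 0.05

for _ in range(500):
    grad = U - X                                     # gradient of the fit term
    diff = U[:, None, :] - U[None, :, :]             # pairwise u_i - u_j
    dist = np.linalg.norm(diff, axis=2, keepdims=True)
    # subgradient of the fusion penalty (zero where centroids coincide)
    grad += lam * (diff / np.where(dist > 1e-8, dist, np.inf)).sum(axis=1)
    U -= step * grad

# rows of U that (nearly) coincide share a cluster
print(np.round(U, 2))
```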
A Model of Grid Service Capacity
Youcef Derbal
2007-01-01
Computational grids (CGs) are large-scale networks of geographically distributed aggregates of resource clusters that may be contributed by distinct organizations for the provision of computing services such as model simulation, compute cycles and data mining. Traditionally, the decision-making strategies underlying the grid management mechanisms rely on the physical view of the grid resource model. This entails the need for complex multi-dimensional search strategies and a considerable level of resource state information exchange between the grid management domains. In this paper we argue that, with the adoption of service-oriented grid architectures, a logical service-oriented view of the resource model provides a more appropriate level of abstraction to express the grid's capacity to handle incoming service requests. In this respect, we propose a quantification model of the aggregated service capacity of the hosting environment that is updated based on the monitored state of the various environmental resources required by the hosted services. A comparative experimental validation of the model shows its performance towards enabling an adequate exploitation of provisioned services.
Document Clustering Based on Semi-Supervised Term Clustering
Hamid Mahmoodi
2012-05-01
The study proposes a multi-step feature (term) selection process that, in semi-supervised fashion, provides initial centers for term clusters. The fuzzy c-means (FCM) clustering algorithm is then used for clustering the terms, and finally each document is assigned to its closest associated term clusters. While most text clustering algorithms use documents directly for clustering, we propose to first group the terms using the FCM algorithm and then cluster documents based on the term clusters. We evaluate the effectiveness of our technique on several standard text collections and compare our results with some classical text clustering algorithms.
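A minimal numpy sketch of the fuzzy c-means step at the heart of the method; in the paper the rows would be term vectors from a term-document matrix, whereas the toy data, cluster count and fuzzifier below are illustrative assumptions.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns membership matrix U (n x c) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        # standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
        p = 2.0 / (m - 1.0)
        U = (1.0 / d ** p) / (1.0 / d ** p).sum(axis=1, keepdims=True)
    return U, centers

X = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 4.9]])
U, centers = fcm(X)
print(np.round(U, 2))   # each row: membership of a point in each cluster
```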
Fiscal Capacity Equalisation in Tanzania
Allers, Maarten A.; Ishemoi, Lewis J.
2010-01-01
Fiscal equalisation aims at enabling decentralised governments to supply similar services at similar tax rates. In order to equalise fiscal disparities, differences in both fiscal capacities and in fiscal needs have to be measured. This paper focuses on the measurement of fiscal capacity in a developing country.
Improving African health research capacity
Lazarus, Jeff; Wallace, Samantha A; Liljestrand, Jerker
2010-01-01
The issue of strengthening local research capacity in Africa is again high on the health and development agenda. The latest initiative comes from the Wellcome Trust. But when it comes to capacity development, one of the chief obstacles that health sectors in the region must confront is the migrat...
Checking Capacity for MIMO Configurations
Thaysen, Jesper; Jakobsen, Kaj Bjarne
2007-01-01
Wireless system capacity can be added by increasing the number of antennas in a MIMO setup or by carefully optimizing the performance of a smaller number of antennas.
Information capacity of quantum observable
Holevo, A S
2011-01-01
In this paper we consider the classical capacities of quantum-classical channels corresponding to measurement of observables. Special attention is paid to the case of continuous observables. We give the formulas for unassisted and entanglement-assisted classical capacities $C,C_{ea}$ and consider some explicitly solvable cases which give new examples of entanglement-breaking channels with $C_{ea}>C.$
Capacity Building in Land Administration
Enemark, Stig; Williamson, I
2004-01-01
Capacity building is increasingly seen as a key component of land administration projects in developing countries and countries in transition undertaken by the international development banks and individual country development assistance agencies. However, the capacity building concept is often used within... infrastructures for implementing land policies in a sustainable way. Where a project is established to create land administration infrastructures in developing or transition countries, it is critical that capacity building is a mainstream component, not an add-on, which is often the case. In fact such projects... should be dealt with as capacity building projects in themselves. The article introduces a conceptual analytical framework that provides some guidance when dealing with capacity building for land administration in support of a broader land policy agenda.
Brazilian Cardiorespiratory Fitness Classification Based on Maximum Oxygen Consumption
Herdy, Artur Haddad; Caixeta, Ananda
2016-01-01
Background: Cardiopulmonary exercise testing (CPET) is the most complete tool available to assess functional aerobic capacity (FAC). Maximum oxygen consumption (VO2 max), an important biomarker, reflects the real FAC. Objective: To develop a cardiorespiratory fitness (CRF) classification based on VO2 max in a Brazilian sample of healthy and physically active individuals of both sexes. Methods: We selected 2837 CPETs from 2837 individuals aged 15 to 74 years, distributed as follows: G1 (15 to 24); G2 (25 to 34); G3 (35 to 44); G4 (45 to 54); G5 (55 to 64) and G6 (65 to 74). Good CRF was defined as the mean VO2 max obtained for each group, generating subclassifications from Very Low (VL) to Excellent according to the percentage of the group mean achieved (the top band lying above 105%). Results: mean VO2 max by group and sex:
Group   Men     Women
G1      53.13   40.85
G2      49.77   40.01
G3      47.67   34.09
G4      42.52   32.66
G5      37.06   30.04
G6      31.50   26.36
Conclusions: This chart stratifies VO2 max measured on a treadmill in a robust Brazilian sample and can be used as an alternative for the real functional evaluation of physically active and healthy individuals stratified by age and sex. PMID:27305285
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein
2001-02-01
The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells, each of 8000 metric tons (tonnes), sufficient to replicate many heat and compaction characteristics of larger 'full-scale' landfills. An enhanced demonstration cell received moisture supplementation to field capacity, the maximum moisture waste can hold while still limiting the liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control module receiving no moisture additions. Gas recovery has continued for a period of over 4 years. It is quite encouraging that methane recovery from the enhanced cell has been close to 10-fold that experienced with conventional landfills. This is the highest methane recovery rate per unit waste, and thus progress toward stabilization, documented anywhere for such a large waste mass. This high recovery rate is attributed to moisture, and to elevated temperature attained inexpensively during startup. Economic analyses performed under Phase I of this NETL contract indicate 'greenhouse cost effectiveness' to be excellent. Other benefits include substantial waste volume loss (over 30%), which translates to extended landfill life. Other environmental benefits include rapidly improved quality and stabilization (lowered pollutant levels) of the liquid leachate which drains from the waste.
Paddle River Dam : review of probable maximum flood
Clark, D. [UMA Engineering Ltd., Edmonton, AB (Canada); Neill, C.R. [Northwest Hydraulic Consultants Ltd., Edmonton, AB (Canada)
2008-07-01
The Paddle River Dam was built in northern Alberta in the mid 1980s for flood control. According to the 1999 Canadian Dam Association (CDA) guidelines, this 35 metre high, zoned earthfill dam with a spillway capacity sized to accommodate a probable maximum flood (PMF) is rated as a very high hazard. At the time of design, the PMF was estimated to have a peak flow rate of 858 m³/s. A review of the PMF in 2002 increased the peak flow rate to 1,890 m³/s. In light of a 2007 revision of the CDA safety guidelines, the PMF was reviewed and the inflow design flood (IDF) was re-evaluated. This paper discussed the levels of uncertainty inherent in PMF determinations and some difficulties encountered with the SSARR hydrologic model and the HEC-RAS hydraulic model in unsteady mode. The paper also presented and discussed the analysis used to determine incremental damages, upon which a new IDF of 840 m³/s was recommended. The paper discussed the PMF review, modelling methodology, hydrograph inputs, and incremental damage of floods. It was concluded that the PMF review, involving hydraulic routing through the valley bottom together with reconsideration of the previous runoff modelling, provides evidence that the peak reservoir inflow could reasonably be reduced by approximately 20 per cent. 8 refs., 5 tabs., 8 figs.
Star formation in dense clusters
Myers, Philip C
2011-01-01
A model of core-clump accretion with equally likely stopping describes star formation in the dense parts of clusters, where models of isolated collapsing cores may not apply. Each core accretes at a constant rate onto its protostar, while the surrounding clump gas accretes as a power of protostar mass. Short accretion flows resemble Shu accretion, and make low-mass stars. Long flows resemble reduced Bondi accretion and make massive stars. Accretion stops due to environmental processes of dynamical ejection, gravitational competition, and gas dispersal by stellar feedback, independent of initial core structure. The model matches the field star IMF from 0.01 to more than 10 solar masses. The core accretion rate and the mean accretion duration set the peak of the IMF, independent of the local Jeans mass. Massive protostars require the longest accretion durations, up to 0.5 Myr. The maximum protostar luminosity in a cluster indicates the mass and age of its oldest protostar. The distribution of protostar luminosi...
Logistics Enterprise Evaluation Model Based On Fuzzy Clustering Analysis
Fu, Pei-hua; Yin, Hong-bo
In this paper, we introduce an evaluation model for logistics enterprises based on a fuzzy clustering algorithm. First, we present an evaluation index system covering basic information, management level, technical strength, transport capacity, informatization level, market competition and customer service. We determine the index weights according to the grades and evaluate the integrated ability of the logistics enterprises using the fuzzy cluster analysis method. We describe the system evaluation module and the cluster analysis module in detail, explain how the two modules were implemented, and finally give the results of the system.
Gottlieb, S
2001-01-01
Small Beowulf clusters can effectively serve as personal or group supercomputers. In such an environment, a cluster can be optimally designed for a specific problem (or a small set of codes). We discuss how theoretical analysis of the code and benchmarking on similar hardware lead to optimal systems.
[Cluster headache differential diagnosis].
Guégan-Massardier, Evelyne; Laubier, Cécile
2015-11-01
Cluster headache is characterized by disabling, stereotyped headache attacks. Early diagnosis allows appropriate treatment, but unfortunately diagnostic errors are frequent. The main differential diagnoses are other primary headaches: migraine, which is more frequent and often diagnosed in excess; trigeminal neuralgia; and the other trigeminal autonomic cephalalgias. An underlying vascular or tumoral condition can mimic cluster headache, so neck and brain imaging is recommended, ideally MRI.
1999-01-01
Atlas Image mosaic, covering 34' x 34' on the sky, of the Coma cluster, aka Abell 1656. This is a particularly rich cluster of individual galaxies (over 1000 members), most prominently the two giant ellipticals, NGC 4874 (right) and NGC 4889 (left). The remaining members are mostly smaller ellipticals, but spiral galaxies are also evident in the 2MASS image. The cluster is seen toward the constellation Coma Berenices, but is actually at a distance of about 100 Mpc (330 million light years, or a redshift of 0.023) from us. At this distance, the cluster is in what is known as the 'Hubble flow,' or the overall expansion of the Universe. As such, astronomers can measure the Hubble Constant, or the universal expansion rate, based on the distance to this cluster. Large, rich clusters, such as Coma, allow astronomers to measure the 'missing mass,' i.e., the matter in the cluster that we cannot see, since it gravitationally influences the motions of the member galaxies within the cluster. The near-infrared maps the overall luminous mass content of the member galaxies, since the light at these wavelengths is dominated by the more numerous older stellar populations. Galaxies, as seen by 2MASS, look fairly smooth and homogeneous, as can be seen from the Hubble 'tuning fork' diagram of near-infrared galaxy morphology. Image mosaic by S. Van Dyk (IPAC).
Huang, Yifen
2010-01-01
Mixed-initiative clustering is a task where a user and a machine work collaboratively to analyze a large set of documents. We hypothesize that a user and a machine can both learn better clustering models through enriched communication and interactive learning from each other. The first contribution of this thesis is providing a framework of…
Cluster Synchronization Algorithms
Xia, Weiguo; Cao, Ming
2010-01-01
This paper presents two approaches to achieving cluster synchronization in dynamical multi-agent systems. In contrast to the widely studied synchronization behavior, where all the coupled agents converge to the same value asymptotically, in the cluster synchronization problem studied in this paper,
Neurostimulation in cluster headache
Pedersen, Jeppe L; Barloese, Mads; Jensen, Rigmor H
2013-01-01
PURPOSE OF REVIEW: Neurostimulation has emerged as a viable treatment for intractable chronic cluster headache. Several therapeutic strategies are being investigated, including stimulation of the hypothalamus, occipital nerves and sphenopalatine ganglion. The aim of this review is to provide... effective strategy must be preferred as first-line therapy for intractable chronic cluster headache.
Securing personal network clusters
Jehangir, Assed; Heemstra de Groot, Sonia M.
2007-01-01
A Personal Network is a self-organizing, secure and private network of a user’s devices notwithstanding their geographic location. It aims to utilize pervasive computing to provide users with new and improved services. In this paper we propose a model for securing Personal Network clusters. Clusters
Yu-Bao Liu; Jia-Rong Cai; Jian Yin; Ada Wai-Chee Fu
2008-01-01
Clustering text data streams is an important issue in the data mining community and has a number of applications, such as news group filtering, text crawling, document organization, and topic detection and tracking. However, most methods are similarity-based approaches that only use the TF*IDF scheme to represent the semantics of text data, which often leads to poor clustering quality. Recently, researchers have argued that the semantic smoothing model is more efficient than the existing TF*IDF scheme for improving text clustering quality. However, the existing semantic smoothing model is not suitable for a dynamic text data context. In this paper, we first extend the semantic smoothing model to the text data stream context. Based on the extended model, we then present two online clustering algorithms, OCTS and OCTSM, for the clustering of massive text data streams. In both algorithms, we also present a new cluster statistics structure, named the cluster profile, which can capture the semantics of text data streams dynamically and at the same time speed up the clustering process. Some efficient implementations of our algorithms are also given. Finally, we present a series of experimental results illustrating the effectiveness of our technique.
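The cluster-profile idea can be illustrated with a generic single-pass stream clusterer that keeps one incrementally updated profile per cluster. This skeleton uses plain term frequencies and cosine similarity rather than the paper's semantic smoothing, and the threshold and toy documents are invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

profiles = []      # one Counter of term weights per cluster
THRESHOLD = 0.3    # assumed similarity threshold

for doc in ["stock market falls", "market stock rally",
            "football cup final", "cup final tonight"]:
    vec = Counter(doc.split())
    sims = [(cosine(vec, p), i) for i, p in enumerate(profiles)]
    best, idx = max(sims, default=(0.0, -1))
    if best >= THRESHOLD:
        profiles[idx].update(vec)   # fold document into the cluster profile
    else:
        profiles.append(vec)        # start a new cluster
print(len(profiles), "clusters")    # 2
```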
Müller, Emmanuel; Assent, Ira; Günnemann, Stephan;
2009-01-01
Subspace clustering aims at detecting clusters in any subspace projection of a high dimensional space. As the number of possible subspace projections is exponential in the number of dimensions, the result is often tremendously large. Recent approaches fail to reduce results to relevant subspace c...
Structural transitions in clusters
Ghazali, A.; Lévy, J.-C. S.
1997-02-01
Monatomic clusters are studied by Monte Carlo relaxation using generalized Lennard-Jones potentials. A transition from an icosahedral symmetry to a crystalline symmetry with stacking faults is always observed. Bcc-based soft atom clusters are found to have a lower energy than the corresponding hcp and fcc ones below the melting point.
Mathematical classification and clustering
Mirkin, Boris
1996-01-01
I am very happy to have this opportunity to present the work of Boris Mirkin, a distinguished Russian scholar in the areas of data analysis and decision making methodologies. The monograph is devoted entirely to clustering, a discipline dispersed through many theoretical and application areas, from mathematical statistics and combinatorial optimization to biology, sociology and organizational structures. It compiles an immense amount of research done to date, including many original Russian developments never presented to the international community before (for instance, cluster-by-cluster versions of the K-Means method in Chapter 4 or uniform partitioning in Chapter 5). The author's approach, approximation clustering, allows him both to systematize a great part of the discipline and to develop many innovative methods in the framework of optimization problems. The optimization methods considered are proved to be meaningful in the contexts of data analysis and clustering. The material presented in ...
Neutrosophic Hierarchical Clustering Algoritms
Rıdvan Şahin
2014-03-01
Interval neutrosophic sets (INS) are a generalization of interval-valued intuitionistic fuzzy sets (IVIFS), whose membership and non-membership values of elements consist of fuzzy ranges, while single-valued neutrosophic sets (SVNS) are regarded as an extension of intuitionistic fuzzy sets (IFS). In this paper, we extend the hierarchical clustering techniques proposed for IFSs and IVIFSs to SVNSs and INSs, respectively. Based on the traditional hierarchical clustering procedure, the single-valued neutrosophic aggregation operator, and the basic distance measures between SVNSs, we define a single-valued neutrosophic hierarchical clustering algorithm for clustering SVNSs. We then extend the algorithm to classify interval neutrosophic data. Finally, we present some numerical examples to show the effectiveness and availability of the developed clustering algorithms.
Cool Cluster Correctly Correlated
Varganov, Sergey Aleksandrovich [Iowa State Univ., Ames, IA (United States)
2005-01-01
Atomic clusters are unique objects, which occupy an intermediate position between atoms and condensed matter systems. For a long time it was thought that physical and chemical properties of atomic clusters monotonically change with increasing size of the cluster from a single atom to a condensed matter system. However, recently it has become clear that many properties of atomic clusters can change drastically with the size of the clusters. Because physical and chemical properties of clusters can be adjusted simply by changing the cluster's size, different applications of atomic clusters were proposed. One example is the catalytic activity of clusters of specific sizes in different chemical reactions. Another example is a potential application of atomic clusters in microelectronics, where their band gaps can be adjusted by simply changing cluster sizes. In recent years significant advances in experimental techniques allow one to synthesize and study atomic clusters of specified sizes. However, the interpretation of the results is often difficult. The theoretical methods are frequently used to help in interpretation of complex experimental data. Most of the theoretical approaches have been based on empirical or semiempirical methods. These methods allow one to study large and small clusters using the same approximations. However, since empirical and semiempirical methods rely on simple models with many parameters, it is often difficult to estimate the quantitative and even qualitative accuracy of the results. On the other hand, because of significant advances in quantum chemical methods and computer capabilities, it is now possible to do high quality ab-initio calculations not only on systems of few atoms but on clusters of practical interest as well. In addition to accurate results for specific clusters, such methods can be used for benchmarking of different empirical and semiempirical approaches. The atomic clusters studied in this work contain from a few atoms
Job Oriented Monitoring Clusters
Vijayalaxmi Cigala,
2011-03-01
There has been a lot of development in the field of clusters and grids. Recently, the use of clusters has been on the rise in every possible field. This paper proposes a system that monitors jobs on large computational clusters. Monitoring jobs is essential to understanding how jobs are being executed, and helps us understand the complete life cycle of the jobs executed on large clusters. This paper also describes how the information obtained by monitoring the jobs can help increase the overall throughput of clusters. Heuristics help in efficient job distribution among the computational nodes, thereby accomplishing a fair job distribution policy. The proposed system is capable of load balancing among the computational nodes, detecting failures, taking corrective actions after failure detection, job monitoring, system resource monitoring, etc.
Dydak, F; Nefedov, Y; Wotschack, J; Zhemchugov, A
2004-01-01
For a bias-free momentum measurement of TPC tracks, the correct determination of cluster positions is mandatory. We argue in particular that (i) the reconstruction of the entire longitudinal signal shape in view of longitudinal diffusion, electronic pulse shaping, and track inclination is important both for the polar angle reconstruction and for optimum r phi resolution; and that (ii) self-crosstalk of pad signals calls for special measures for the reconstruction of the z coordinate. The problem of 'shadow clusters' is resolved. Algorithms are presented for accepting clusters as 'good' clusters, and for the reconstruction of the r phi and z cluster coordinates, including provisions for 'bad' pads and pads next to sector boundaries, respectively.
Teuben, P. J.; Wolfire, M. G.; Pound, M. W.; Mundy, L. G.
We have assembled a cluster of Intel-Pentium based PCs running Linux to compute a large set of Photodissociation Region (PDR) and Dust Continuum models. For various reasons the cluster is heterogeneous, currently ranging from a single Pentium-II 333 MHz to dual Pentium-III 450 MHz CPU machines. Although this will be sufficient for our ``embarrassingly parallelizable problem'' it may present some challenges for as yet unplanned future use. In addition the cluster was used to construct a MIRIAD benchmark, and compared to equivalent Ultra-Sparc based workstations. Currently the cluster consists of 8 machines, 14 CPUs, 50GB of disk-space, and a total peak speed of 5.83 GHz, or about 1.5 Gflops. The total cost of this cluster has been about $12,000, including all cabling, networking equipment, rack, and a CD-R backup system. The URL for this project is http://dustem.astro.umd.edu.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 Employees' Benefits, 2010-04-01 edition. CREDITABLE RAILROAD COMPENSATION, § 211.14 Maximum creditable compensation. ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 Transportation, 2010-10-01 edition. Allowable Stress, § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu and power producing reactor is estimated theoretically. (T.R.H.)
Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often dispenses with the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
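The core of the approach is easy to sketch: model the counts in each bin as Poisson with rate = background + line profile, and minimize the negative Poisson log-likelihood, which stays valid at low counts where Gaussian chi-square fitting breaks down. The sketch below (in Python rather than CORA itself) uses simulated data and parameter names of our own choosing.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = np.linspace(13.0, 14.0, 200)          # wavelength grid (Angstrom, assumed)

def model(p, x):
    amp, center, width, bkg = p           # Gaussian line on a flat background
    return bkg + amp * np.exp(-0.5 * ((x - center) / width) ** 2)

counts = rng.poisson(model([5.0, 13.5, 0.02, 0.5], x))   # simulated spectrum

def neg_log_lik(p):
    lam = np.clip(model(p, x), 1e-12, None)
    return np.sum(lam - counts * np.log(lam))  # Poisson NLL (up to a constant)

fit = minimize(neg_log_lik, x0=[3.0, 13.45, 0.05, 1.0], method="Nelder-Mead")
print("amp, center, width, bkg =", np.round(fit.x, 3))
```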
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a pair of run-length/AC coefficient level, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper-bound for the sum of squares of AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type constraints are based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, space of minimum of 346 bits and maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single-nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a recently introduced group-theoretic approach to modelling inversions. This MLE functions as a corrected distance: in particular, we show that, because of the way sequences of inversions interact with each other, the minimal distance and the MLE distance may order the distances of two genomes from a third differently. The second part of the work tackles the problem of accounting for the symmetries of circular arrangements. Whereas a frame of reference is generally fixed and all computations are made relative to it, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, and examples are offered.
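As a concrete, much-simplified illustration of a maximum likelihood rearrangement distance, the Python sketch below models evolution as a walk on the symmetric group in which each step reverses a uniformly chosen contiguous segment of a linear arrangement of five regions; the MLE distance is the step count k that maximizes the probability of reaching the observed arrangement from the identity. This toy omits the paper's circular genomes, dihedral symmetries, and group-theoretic machinery, and the target arrangement is invented.

    import itertools
    import numpy as np

    n = 5
    perms = list(itertools.permutations(range(n)))
    index = {p: i for i, p in enumerate(perms)}

    def reversals(p):
        """All arrangements reachable by reversing one contiguous segment."""
        return [p[:i] + tuple(reversed(p[i:j])) + p[j:]
                for i in range(n) for j in range(i + 2, n + 1)]

    # One-step transition matrix of the uniform random-inversion walk.
    T = np.zeros((len(perms), len(perms)))
    for p in perms:
        for q in reversals(p):
            T[index[p], index[q]] += 1.0 / len(reversals(p))

    # Likelihood of the observed arrangement after k steps from identity.
    target = (0, 2, 1, 3, 4)  # invented: one reversal away from identity
    state = np.zeros(len(perms))
    state[index[tuple(range(n))]] = 1.0
    likelihoods = []
    for k in range(1, 21):
        state = state @ T
        likelihoods.append(state[index[target]])

    print("MLE inversion distance:", int(np.argmax(likelihoods)) + 1)

Here the MLE agrees with the minimal distance, but as the abstract notes, interactions between inversions mean the two estimators need not rank pairs of genomes the same way in general.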
An Empirical Analysis of Rough Set Categorical Clustering Techniques.
Uddin, Jamal; Ghazali, Rozaida; Deris, Mustafa Mat
2017-01-01
Clustering a set of objects into homogeneous groups is a fundamental operation in data mining. Recently, much attention has been paid to categorical data clustering, where data objects are made up of non-numerical attributes. For categorical data clustering, rough-set-based approaches such as Maximum Dependency Attribute (MDA) and Maximum Significance Attribute (MSA) have outperformed predecessor approaches such as Bi-Clustering (BC), Total Roughness (TR), and Min-Min Roughness (MMR). This paper presents the limitations and issues of the MDA and MSA techniques on special types of data sets where both techniques fail to select, or face difficulty in selecting, the best clustering attribute. This analysis motivates the need for a better and more general rough set theory approach that can cope with the issues of MDA and MSA. Hence, an alternative technique named Maximum Indiscernible Attribute (MIA), which clusters categorical data using rough set indiscernibility relations, is proposed. The novelty of the proposed approach is that, unlike other rough set theory techniques, it uses the domain knowledge of the data set; it is based on the concept of the indiscernibility relation combined with the number of clusters. To show the significance of the proposed approach, the effect of the number of clusters on rough accuracy, purity, and entropy is described in the form of propositions. Moreover, ten data sets from previously utilized research cases and the UCI repository are used in experiments. The results, presented in tabular and graphical form, show that the proposed MIA technique performs better in selecting the clustering attribute in terms of purity, entropy, iterations, time, accuracy, and rough accuracy.
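The building block this abstract leans on is the indiscernibility relation: two objects are indiscernible with respect to a set of attributes when they agree on every attribute in that set. The Python sketch below computes these equivalence classes for a toy categorical table; the data and attribute names are invented, and this is only the rough-set primitive underlying MIA, not the MIA attribute-selection procedure itself.

    from collections import defaultdict

    def indiscernibility_classes(table, attrs):
        """Partition object indices into equivalence classes of IND(attrs):
        objects fall in the same class iff they agree on every attribute."""
        classes = defaultdict(list)
        for i, row in enumerate(table):
            classes[tuple(row[a] for a in attrs)].append(i)
        return list(classes.values())

    # Toy categorical data set (rows are objects, keys are attributes).
    animals = [
        {"legs": "4", "fur": "yes", "diet": "herbivore"},
        {"legs": "4", "fur": "yes", "diet": "carnivore"},
        {"legs": "2", "fur": "no",  "diet": "omnivore"},
        {"legs": "4", "fur": "yes", "diet": "herbivore"},
    ]
    print(indiscernibility_classes(animals, ["legs", "fur"]))
    # -> [[0, 1, 3], [2]]: objects 0, 1, 3 are indiscernible on legs+fur

Coarser attribute sets yield coarser partitions; techniques in this family score candidate clustering attributes by how the partitions they induce relate to those induced by the remaining attributes.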