Beeping a Maximal Independent Set
Afek, Yehuda; Alon, Noga; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge of the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot, beeping nodes receive no feedback, while silent nodes can only differentiate between none of their neighbors beeping, or at least one of their neighbors beeping. We start by proving a lower bound showing that in this model it is not possible to locally converge to an MIS in sub-polynomial time. We then study four different relaxations of the model which allow us to circumvent the lower bound and find an MIS in polylogarithmic time. First, we show that if a polynomial upper bound on the network size is known, it is possi...
Algorithms for k-Colouring and Finding Maximal Independent Sets
Byskov, Jesper Makholm
2003-01-01
In this extended abstract, we construct algorithms that decide for a graph with n vertices whether there exists a 4-, 5- or 6-colouring of the vertices, running in time O(1.7504^n), O(2.1592^n) and O(2.3289^n), respectively, using polynomial space. For 6- or 7-colouring we construct algorithms running in time O(2.2680^n) and O(2.4023^n), respectively, using exponential space. To do this, we prove that the number of maximal independent sets of size at most k (k-MIS's) in a graph is at most (d-1)^{dk-n} d^{n-(d-1)k} for any d >= 4. Eppstein shows the same bound for d = 4.
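The counting bound can be sanity-checked by brute force on a tiny graph. A Python sketch follows; the 6-cycle is a made-up example, and the formula (d-1)^{dk-n} d^{n-(d-1)k} is our reading of the abstract's bound (for d = 4 it matches Eppstein's 3^{4k-n} 4^{n-3k}):

```python
from itertools import combinations

def maximal_independent_sets(n, edges):
    """Enumerate all maximal independent sets of a graph by brute force."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    result = []
    for k in range(n + 1):
        for sub in combinations(range(n), k):
            s = set(sub)
            independent = all(not (adj[u] & s) for u in s)
            maximal = all(adj[v] & s for v in set(range(n)) - s)
            if independent and maximal:
                result.append(s)
    return result

# Made-up example: the 6-cycle C_6 (n = 6), counting k-MIS's for k = 2.
n, k, d = 6, 2, 4
mis = maximal_independent_sets(n, [(i, (i + 1) % n) for i in range(n)])
bound = (d - 1) ** (d * k - n) * d ** (n - (d - 1) * k)  # 3^2 * 4^0 = 9
assert sum(1 for s in mis if len(s) <= k) <= bound
```

C_6 has five maximal independent sets (three of size 2, two of size 3), so the count of k-MIS's for k = 2 is 3, comfortably below the bound of 9.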
Fast Deterministic Distributed Maximal Independent Set Computation on Growth-Bounded Graphs
Kuhn, Fabian; Moscibroda, Thomas; Nieberg, Tim; Wattenhofer, Roger; Fraigniaud, Pierre
2005-01-01
The distributed complexity of computing a maximal independent set in a graph is of both practical and theoretical importance. While there exists an elegant O(log n) time randomized algorithm for general graphs, no deterministic polylogarithmic algorithm is known. In this paper, we study the problem
Simple neural-like p systems for maximal independent set selection.
Xu, Lei; Jeavons, Peter
2013-06-01
Membrane systems (P systems) are distributed computing models inspired by living cells where a collection of processors jointly achieves a computing task. The problem of maximal independent set (MIS) selection in a graph is to choose a set of nonadjacent nodes to which no further nodes can be added. In this letter, we design a class of simple neural-like P systems to solve the MIS selection problem efficiently in a distributed way. This new class of systems possesses two features that are attractive for both distributed computing and membrane computing: first, the individual processors do not need any information about the overall size of the graph; second, they communicate using only one-bit messages.
Greedy Sequential Maximal Independent Set and Matching are Parallel on Average
Blelloch, Guy; Shun, Julian
2012-01-01
The greedy sequential algorithm for maximal independent set (MIS) loops over the vertices in arbitrary order, adding a vertex to the resulting set if and only if no previous neighboring vertex has been added. In this loop, as in many sequential loops, each iterate depends directly on only a subset of the previous iterates (i.e., knowing that any one of a vertex's neighbors is in the MIS, or knowing that it has no previous neighbors, is sufficient to decide its fate). This leads to a dependence structure among the iterates. If this structure is shallow, then running the iterates in parallel while respecting the dependencies can lead to an efficient parallel implementation mimicking the sequential algorithm. In this paper, we show that for any graph, and for a random ordering of the vertices, the dependence depth of the sequential greedy MIS algorithm is polylogarithmic (O(log^2 n) with high probability). Our results extend previous results that show polylogarithmic bounds only for random graphs. We show similar...
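The greedy loop the abstract describes can be sketched in a few lines of Python (an illustration only, not the authors' parallel implementation; the cycle graph is a made-up example):

```python
import random

def greedy_mis(adj, order):
    """Scan vertices in `order`, adding a vertex iff no earlier
    neighbour has already been added -- the greedy sequential MIS."""
    in_mis = set()
    for v in order:
        if not any(u in in_mis for u in adj[v]):
            in_mis.add(v)
    return in_mis

# Made-up example: a cycle on 6 vertices, scanned in random order.
n = 6
adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
order = list(range(n))
random.shuffle(order)
mis = greedy_mis(adj, order)

# The result is independent (no internal edges) and maximal.
assert all(not (adj[v] & mis) for v in mis)
assert all(v in mis or (adj[v] & mis) for v in adj)
```

The paper's point is that, under a random `order`, the longest chain of decisions that must be made sequentially is only O(log^2 n) deep, so the loop parallelizes well.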
Swanepoel, Konrad J
2011-01-01
A subset of a normed space X is called equilateral if the distance between any two points is the same. Let m(X) be the smallest possible size of an equilateral subset of X maximal with respect to inclusion. We first observe that Petty's construction of a d-dimensional X of any finite dimension d >= 4 with m(X)=4 can be generalised to show that m(X\\oplus_1\\R)=4 for any X of dimension at least 2 which has a smooth point on its unit sphere. By a construction involving Hadamard matrices we then show that both m(\\ell_p) and m(\\ell_p^d) are finite and bounded above by a function of p, for all 1 <= p < \\infty. Moreover, for each p there exists c > 1 such that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than c from \\ell_p^d. Using Brouwer's fixed-point theorem we show that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than 3/2 from \\ell_\\infty^d. A graph-theoretical argument furthermore shows that m(\\ell_\\infty^d)=d+1. The above results lead us to conjecture that m(X) <= 1+\\dim X.
Independent sets in chain cacti
Sedlar, Jelena
2011-01-01
In this paper chain cacti are considered. First, for two specific classes of chain cacti (orto-chains and meta-chains of cycles with h vertices) the recurrence relation for the independence polynomial is derived. That recurrence relation is then used in deriving explicit expressions for the independence number and the number of maximum independent sets for such chains. Also, the recurrence relation for the total number of independent sets for such graphs is derived. Finally, the proof is provided that orto-chains and meta-chains are the only extremal chain cacti with respect to the total number of independent sets (orto-chains minimal and meta-chains maximal).
Lisonek, Petr
1996-01-01
...our classifications confirm the maximality of previously known sets, the results in E^7 and E^8 are new. Their counterpart in dimension larger than 10 is a set of unit vectors with only two values of inner products in the Lorentz space R^{d,1}. The maximality of this set again follows from a bound due...
Approximate Revenue Maximization in Interdependent Value Settings
Chawla, Shuchi; Fu, Hu; Karlin, Anna
2014-01-01
We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions, however, ...
Strong sequences and independent sets
Joanna Jureczko
2016-05-01
A family $\\mathcal{S} \\subseteq \\mathcal{P}(\\omega)$ is \\textit{an independent family} if for each pair $\\mathcal{A, B}$ of disjoint finite subsets of $\\mathcal{S}$ the set $\\bigcap \\mathcal{A} \\cap (\\omega \\setminus \\bigcup \\mathcal{B})$ is nonempty. The fact that there is an independent family on $\\omega$ of size continuum was proved by Fichtenholz and Kantorowicz in \\cite{FK}. If we substitute $\\mathcal{P}(\\omega)$ by a set $(X, r)$ with an arbitrary relation \\textit{r}, it is a natural question to ask about the existence and length of an independent set on $(X, r)$. In this paper special assumptions for such existence will be considered. On the other hand, in the 1960s the strong sequences method was introduced by Efimov, who used it to prove some famous theorems on dyadic spaces, such as the Marczewski theorem on cellularity, the Shanin theorem on a calibre, the Esenin-Volpin theorem, and others. In this paper we consider the length of strong sequences, the length of independent sets, and other well-known cardinal invariants, and examine the inequalities among them.
Maximal induced paths and minimal percolating sets in hypercubes
Anil M. Shende
2015-01-01
For a graph $G$, the \\emph{$r$-bootstrap percolation} process can be described as follows: Start with an initial set $A$ of "infected" vertices. Infect any vertex with at least $r$ infected neighbours, and continue this process until no new vertices can be infected. $A$ is said to \\emph{percolate in $G$} if eventually all the vertices of $G$ are infected. $A$ is a \\emph{minimal percolating set} in $G$ if $A$ percolates in $G$ and no proper subset of $A$ percolates in $G$. An induced path, $P$, in a hypercube $Q_n$ is maximal if no induced path in $Q_n$ properly contains $P$. Induced paths in hypercubes are also called snakes. We study the relationship between maximal snakes and minimal percolating sets (under 2-bootstrap percolation) in hypercubes. In particular, we show that every maximal snake contains a minimal percolating set, and that every minimal percolating set is contained in a maximal snake.
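The percolation process just described is simple to simulate; a minimal Python sketch on $Q_2$, the smallest interesting hypercube (illustrative only, the paper's results concern larger $Q_n$):

```python
def percolates(adj, initial, r=2):
    """Run r-bootstrap percolation from `initial`: repeatedly infect
    any vertex with at least r infected neighbours; return True iff
    every vertex of the graph ends up infected."""
    infected = set(initial)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in infected and len(infected & adj[v]) >= r:
                infected.add(v)
                changed = True
    return infected == set(adj)

# Q_2, the 2-dimensional hypercube (a 4-cycle on vertices 0..3).
adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
assert percolates(adj, {0, 3})       # two opposite corners percolate
assert not percolates(adj, {0, 1})   # a single edge does not
```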
Kakeya sets and directional maximal operators in the plane
Bateman, Michael
2009-01-01
We completely characterize the boundedness of planar directional maximal operators on $L^p$ . More precisely, if $\\Omega$ is a set of directions, we show that $M_{\\Omega}$ , the maximal operator associated to line segments in the directions $\\Omega$ , is unbounded on $L^p$ for all $p \\lt \\infty$ precisely when $\\Omega$ admits Kakeya-type sets. In fact, we show that if $\\Omega$ does not admit Kakeya sets, then $\\Omega$ is a generalized lacunary set, and hence, $M_{\\Omega}$ is bounded on $L^p$ ...
Feature Extraction Using Supervised Independent Component Analysis by Maximizing Class Distance
Sakaguchi, Yoshinori; Ozawa, Seiichi; Kotani, Manabu
Recently, Independent Component Analysis (ICA) has been applied not only to problems of blind signal separation, but also to feature extraction of patterns. However, the effectiveness of pattern features extracted by conventional ICA algorithms depends on the pattern sets; that is, on how patterns are distributed in the feature space. As one of the reasons, we have pointed out that ICA features are obtained by increasing only their independence, even if class information is available. In this context, we can expect that higher-performance features can be obtained by introducing class information into conventional ICA algorithms. In this paper, we propose a supervised ICA (SICA) that maximizes the Mahalanobis distance between features of different classes as well as maximizing their independence. In the first experiment, two-dimensional artificial data are applied to the proposed SICA algorithm to see how well maximizing the Mahalanobis distance works in feature extraction. As a result, we demonstrate that the proposed SICA algorithm gives good features with high separability compared with principal component analysis and conventional ICA. In the second experiment, the recognition performance of features extracted by the proposed SICA is evaluated using three data sets from the UCI Machine Learning Repository. From the results, we show that better recognition accuracy is obtained using our proposed SICA. Furthermore, we show that the pattern features extracted by SICA are better than those extracted by maximizing the Mahalanobis distance alone.
On revenue maximization for selling multiple independently distributed items.
Li, Xinye; Yao, Andrew Chi-Chih
2013-07-09
Consider the revenue-maximizing problem in which a single seller wants to sell k different items to a single buyer, who has independently distributed values for the items with additive valuation. The k = 1 case was completely resolved by Myerson's classical work in 1981, whereas for larger k the problem has been the subject of much research effort ever since. Recently, Hart and Nisan analyzed two simple mechanisms: selling the items separately, or selling them as a single bundle. They showed that selling separately guarantees at least a c/log^2 k fraction of the optimal revenue; and for identically distributed items, bundling yields at least a c/log k fraction of the optimal revenue. In this paper, we prove that selling separately guarantees at least a c/log k fraction of the optimal revenue, whereas for identically distributed items, bundling yields at least a constant fraction of the optimal revenue. These bounds are tight (up to a constant factor), settling the open questions raised by Hart and Nisan. The results are valid for arbitrary probability distributions without restrictions. Our results also have implications for other interesting issues, such as monotonicity and randomization of selling mechanisms.
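The separate-vs-bundle comparison can be made concrete with a tiny Monte Carlo sketch for k = 2 items with i.i.d. uniform [0,1] values (a hypothetical toy instance; the bundle price 0.86 is hand-picked for this example and not taken from the paper):

```python
import random

rng = random.Random(0)
N = 100_000
samples = [(rng.random(), rng.random()) for _ in range(N)]

# Selling separately: each U[0,1] item at its monopoly price 1/2
# earns price * P(value >= price) = 1/4 per item in expectation.
sep = sum(0.5 * ((v1 >= 0.5) + (v2 >= 0.5)) for v1, v2 in samples) / N

# Bundling: the additive buyer pays p iff v1 + v2 >= p.
p = 0.86  # hand-picked near-optimal bundle price for this instance
bund = sum(p * (v1 + v2 >= p) for v1, v2 in samples) / N

print(sep, bund)
```

On this instance bundling earns noticeably more than separate sale, which is the kind of gap the Hart-Nisan line of work quantifies in general.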
An Efficient Algorithm for Mining Maximal Frequent Item Sets
A. M.J.M.Z. Rahman
2008-01-01
Problem Statement: The mining of frequent patterns is a basic problem in data mining applications. The algorithms used to generate these frequent patterns must perform efficiently. The objective was to propose an effective algorithm which generates frequent patterns in less time. Approach: We proposed an algorithm based on a hashing technique that combines a vertical tid-set representation of the database with effective pruning mechanisms. It removes all non-maximal frequent item-sets to obtain the exact set of MFI directly. It works efficiently when the number of item-sets and tid-sets is large. Results: The performance of our algorithm was compared with the recently developed MAFIA algorithm, and the results show that our algorithm gives better performance. Conclusions: Hence, the proposed algorithm performs effectively and generates frequent patterns faster.
Set theory an introduction to independence proofs
Kunen, K
1984-01-01
Studies in Logic and the Foundations of Mathematics, Volume 102: Set Theory: An Introduction to Independence Proofs offers an introduction to relative consistency proofs in axiomatic set theory, including combinatorics, sets, trees, and forcing.The book first tackles the foundations of set theory and infinitary combinatorics. Discussions focus on the Suslin problem, Martin's axiom, almost disjoint and quasi-disjoint sets, trees, extensionality and comprehension, relations, functions, and well-ordering, ordinals, cardinals, and real numbers. The manuscript then ponders on well-founded sets and
Independence-friendly cylindric set algebras
Mann, Allen L
2007-01-01
Independence-friendly logic is a conservative extension of first-order logic that has the same expressive power as existential second-order logic. In her Ph.D. thesis, Dechesne introduces a variant of independence-friendly logic called IFG logic. We attempt to algebraize IFG logic in the same way that Boolean algebra is the algebra of propositional logic and cylindric algebra is the algebra of first-order logic. We define independence-friendly cylindric set algebras and prove two main results. First, every independence-friendly cylindric set algebra over a structure has an underlying Kleene algebra. Moreover, the class of such underlying Kleene algebras generates the variety of all Kleene algebras. Hence the equational theory of the class of Kleene algebras that underly an independence-friendly cylindric set algebra is finitely axiomatizable. Second, every one-dimensional independence-friendly cylindric set algebra over a structure has an underlying monadic Kleene algebra. However, the class of such underlyin...
On Matroids and Linearly Independent Set Families
2013-01-01
New families of matroids are constructed in this note. These new families are derived from the concept of a linearly independent set family (LISF) introduced by Eicker and Ewald [Linear Algebra and its Applications 388 (2004) 173-191]. The proposed construction generalizes in a natural way the well-known class of vectorial matroids over a field.
Maximal lattice free bodies, test sets and the Frobenius problem
Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt
Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral ... Our method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune.
Counting independent sets using the Bethe approximation
Chertkov, Michael [Los Alamos National Laboratory; Chandrasekaran, V [MIT; Gamarmik, D [MIT; Shah, D [MIT; Sin, J [MIT
2009-01-01
The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^{-4} log^3(n ε^{-1})) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^{-γ}) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions expected an error of O(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly, and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
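For intuition, the quantity being approximated (the partition function of the hard-core model at activity 1, i.e. the number of independent sets) can be computed exactly by brute force on small graphs; a sketch, feasible only for tiny n:

```python
from itertools import combinations

def count_independent_sets(n, edges):
    """Exact partition function of the hard-core model at activity 1:
    the number of independent sets, including the empty set."""
    count = 0
    for k in range(n + 1):
        for sub in combinations(range(n), k):
            s = set(sub)
            if all(u not in s or v not in s for u, v in edges):
                count += 1
    return count

# Path on 3 vertices: {}, {0}, {1}, {2}, {0,2} -> 5 independent sets.
assert count_independent_sets(3, [(0, 1), (1, 2)]) == 5
# 4-cycle: 1 empty + 4 singletons + 2 opposite pairs -> 7.
assert count_independent_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == 7
```

The exponential cost of this enumeration is exactly why the Bethe/BP approximation studied in the paper is of interest.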
Basis set independent calculation of molecular polarizabilities
Talman, James D.
2012-08-01
It is shown that ε − F, where F is the Hartree-Fock (HF) operator, can be inverted, for molecular systems, in numerical Cartesian coordinates. The method was originally applied to finding corrections to approximate Hartree-Fock orbitals [J. D. Talman, Phys. Rev. A 82, 052518 (2010)]. The approach is applied to determine basis-set-independent dipole polarizabilities for the water molecule using the Sternheimer method within the uncoupled HF and coupled perturbed HF approximations.
Buttelli Adriana Cristine Koch
2015-09-01
The aim of this study was to compare the effects of single- vs. multiple-set water-based resistance training on maximal dynamic strength in young men. Twenty-one physically active young men were randomly allocated into 2 groups: a single-set group (SS, n=10) and a multiple-set group (MS, n=11). The single-set program consisted of only 1 set of 30 s, whereas the multiple-set program comprised 3 sets of 30 s (the rest interval between sets equaled 1 min 30 s). All the water-based resistance exercises were performed at maximal effort, and both groups trained twice a week for 10 weeks. Upper-body (bilateral elbow flexors and bilateral elbow extensors, pec deck and inverse pec deck) as well as lower-body (bilateral knee flexors and unilateral knee extensors) one-repetition maximal tests (1RM) were used to assess changes in muscle strength. The training-related effects were assessed using repeated-measures two-way ANOVA (α=5%). Both the SS and MS groups increased upper- and lower-body 1RM, with no differences between groups. Therefore, these data show that maximal dynamic strength significantly increases in young men after 10 weeks of training in an aquatic environment, and that the improvement in strength levels is independent of the number of sets performed.
Set theory exploring independence and truth
Schindler, Ralf
2014-01-01
This textbook gives an introduction to axiomatic set theory and examines the prominent questions that are relevant in current research in a manner that is accessible to students. Its main theme is the interplay of large cardinals, inner models, forcing, and descriptive set theory. The following topics are covered: • Forcing and constructability • The Solovay-Shelah Theorem i.e. the equiconsistency of ‘every set of reals is Lebesgue measurable’ with one inaccessible cardinal • Fine structure theory and a modern approach to sharps • Jensen’s Covering Lemma • The equivalence of analytic determinacy with sharps • The theory of extenders and iteration trees • A proof of projective determinacy from Woodin cardinals. Set Theory requires only a basic knowledge of mathematical logic and will be suitable for advanced students and researchers.
The number of independent sets in unicyclic graphs
Pedersen, Anders Sune; Vestergaard, Preben Dahl
In this paper, we determine upper and lower bounds for the number of independent sets in a unicyclic graph in terms of its order. This gives an upper bound for the number of independent sets in a connected graph which contains at least one cycle. We also determine the upper bound for the number...
Solutions of Maximal Compatible Granules and Approximations in Rough Set Models
Chen Wu,
2015-06-01
This paper studies a new approach to rough set theory that obtains granules with maximal compatible classes as primitive ones, in which any two objects are mutually compatible. It proposes upper and lower approximation computations that extend rough set models toward building multi-granulation rough set theory in incomplete information systems, discusses the properties and relationships of granules and approximations, and designs algorithms to solve the maximal compatible classes and the lower and upper approximations. The correctness of the algorithms is verified by an example.
On Maximal and Minimal Fuzzy Sets in I-Topological Spaces
Samer Al Ghour
2010-01-01
The notion of maximal fuzzy open sets is introduced. Some basic properties and relationships regarding this notion and other notions of I-topology are given. Moreover, some deep results concerning the known concept of minimal fuzzy open sets are given.
Influence maximization in social networks under an independent cascade-based model
Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan
2016-02-01
The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the spread of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. When more users became involved in the discussions, users balanced their own opinions against those of their neighbors. The number of users who did not change their positive opinions was used to determine positive influence. The corresponding influential users who had maximum positive influence were then obtained. Experiments were conducted on three real networks, namely Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, thus indicating better performance compared with the baseline methods.
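A minimal sketch of one run of the classical independent cascade model that IMIC-OC builds on (the toy network and edge probabilities are hypothetical, and the paper's opinion dynamics are not modeled here):

```python
import random

def independent_cascade(adj_p, seeds, rng):
    """One run of the independent cascade model: each newly activated
    node gets exactly one chance to activate each inactive neighbour,
    succeeding with the probability attached to the edge."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        newly = []
        for u in frontier:
            for v, prob in adj_p.get(u, []):
                if v not in active and rng.random() < prob:
                    active.add(v)
                    newly.append(v)
        frontier = newly
    return active

# Made-up network: u -> list of (v, activation probability) edges.
adj_p = {0: [(1, 0.5), (2, 0.5)], 1: [(3, 0.5)], 2: [(3, 0.5)]}
rng = random.Random(42)
runs = [len(independent_cascade(adj_p, {0}, rng)) for _ in range(2000)]
print(sum(runs) / len(runs))  # Monte Carlo estimate of expected spread
```

Influence maximization then asks for the seed set (of a given size) that maximizes this expected spread.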
State-independent contextuality sets for a qutrit
Xu, Zhen-Peng; Chen, Jing-Ling; Su, Hong-Yi
2015-09-01
We present a generalized set of complex rays for a qutrit in terms of the parameter q = e^{i2π/k}, a k-th root of unity. Remarkably, when k = 2, 3, the set reduces to two well-known state-independent contextuality (SIC) sets: the Yu-Oh set and the Bengtsson-Blanchfield-Cabello set. Based on the Ramanathan-Horodecki criterion and the violation of a noncontextuality inequality, we have proven that the sets with k = 3m and k = 4 are SIC sets, while the set with k = 5 is not. Our generalized set of rays will theoretically enrich the study of SIC proofs, and stimulate novel applications to quantum information processing.
Finding Independent Sets in Unions of Perfect Graphs
2010-01-01
The maximum independent set problem (MaxIS) on general graphs is known to be NP-hard to approximate within a factor of $n^{1-\epsilon}$, for any $\epsilon > 0$. However, there are many "easy" classes of graphs on which the problem can be solved in polynomial time. In this context, an interesting question is that of computing the maximum independent set in a graph that can be expressed as the union of a small number of graphs from an easy class. The MaxIS problem has been studied on unions of i...
Small sets in convex geometry and formal independence over ZFC
Menachem Kojman
2005-01-01
To each closed subset S of a finite-dimensional Euclidean space corresponds a σ-ideal of sets which is σ-generated over S by the convex subsets of S. The set-theoretic properties of this ideal hold geometric information about the set. We discuss the relation of reducibility between convexity ideals and the connections between convexity ideals and other types of ideals, such as the ideals which are generated over squares of Polish spaces by graphs and inverses of graphs of continuous self-maps, or Ramsey ideals, which are generated over Polish spaces by the homogeneous sets with respect to some continuous pair coloring. We also attempt to present to nonspecialists the set-theoretic methods for dealing with formal independence as a means of geometric investigation.
Gauge origin independence in finite basis sets and perturbation theory
Sørensen, Lasse Kragh; Lindh, Roland; Lundberg, Marcus
2017-09-01
We show that origin independence of the oscillator strengths in finite basis sets is possible in any gauge, contrary to what is stated in the literature. This is proved from a discussion of the consequences in perturbation theory when the exact eigenfunctions and eigenvalues of the zeroth-order Hamiltonian H0 cannot be found. We demonstrate that the erroneous conclusion of a lack of gauge-origin independence in the length gauge stems from not transforming the magnetic terms in the multipole expansion, leading to the use of a mixed gauge. Numerical examples of exact origin dependence are shown.
A Comparison of Heuristics with Modularity Maximization Objective using Biological Data Sets
Pirim Harun
2016-01-01
Finding groups of objects exhibiting similar patterns is an important data analytics task. Many disciplines have their own terminologies, such as cluster, group, clique, community, etc., defining the similar objects in a set. Adopting the term community, many exact and heuristic algorithms have been developed to find the communities of interest in available data sets. Here, three heuristic algorithms for finding communities are compared using five gene expression data sets. The heuristics share a common objective function of maximizing the modularity, which is a quality measure of a partition and a reflection of objects' relevance in communities. Partitions generated by the heuristics are compared with the real ones using the adjusted Rand index, one of the most commonly used external validation measures. The paper discusses the results of the partitions on the mentioned biological data sets.
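The modularity objective that the heuristics share can be computed directly from a partition; a short Python sketch on a made-up two-community graph:

```python
def modularity(adj, communities):
    """Newman modularity Q of a partition: sum over communities of
    e_c/m - (d_c/(2m))^2, where e_c counts intra-community edges,
    d_c is the community's total degree, and m the number of edges."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    q = 0.0
    for comm in communities:
        e_c = sum(1 for u in comm for v in adj[u] if v in comm) / 2
        d_c = sum(len(adj[u]) for u in comm)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Made-up graph: two triangles joined by one bridge edge (2-3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(modularity(adj, [{0, 1, 2}, {3, 4, 5}]))  # 5/14, about 0.357
```

Heuristics like those compared in the paper search over partitions to drive this Q as high as possible.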
Luthy, Sarah K; Marinkovic, Aleksandar; Weiner, Daniel J
2011-06-01
High-frequency chest compression (HFCC) is a therapy for cystic fibrosis (CF). We hypothesized that the resonant frequency (f(res)), as measured by impulse oscillometry, could be used to determine what HFCC vest settings produce maximal airflow or volume in pediatric CF patients. In 45 subjects, we studied: f(res), HFCC vest frequencies that subjects used (f(used)), and the HFCC vest frequencies that generated the greatest volume (f(vol)) and airflow (f(flow)) changes as measured by pneumotachometer. Median f(used) for 32 subjects was 14 Hz (range, 6-30). The rank order of the three most common f(used) was 15 Hz (28%) and 12 Hz (21%); three frequencies tied for third: 10, 11, and 14 Hz (5% each). Median f(res) for 43 subjects was 20.30 Hz (range, 7.85-33.65). Nineteen subjects underwent vest-tuning to determine f(vol) and f(flow). Median f(vol) was 8 Hz (range, 6-30). The rank order of the three most common f(vol) was: 8 Hz (42%), 6 Hz (32%), and 10 Hz (21%). Median f(flow) was 26 Hz (range, 8-30). The rank order of the three most common f(flow) was: 30 Hz (26%) and 28 Hz (21%); three frequencies tied for third: 8, 14, and 18 Hz (11% each). There was no correlation between f(used) and f(flow) (r(2) = -0.12) or f(vol) (r(2) = 0.031). There was no correlation between f(res) and f(flow) (r(2) = 0.19) or f(vol) (r(2) = 0.023). Multivariable analysis showed no independent variables were predictive of f(flow) or f(vol). Vest-tuning may be required to optimize clinical utility of HFCC. Multiple HFCC frequencies may need to be used to incorporate f(flow) and f(vol).
Balance between noise and information flow maximizes set complexity of network dynamics.
Tuomo Mäki-Marttunen
Boolean networks have been used as a discrete model for several biological systems, including metabolic and genetic regulatory networks. Due to their simplicity they offer a firm foundation for generic studies of physical systems. In this work we show, using a measure of context-dependent information, set complexity, that prior to reaching an attractor, random Boolean networks pass through a transient state characterized by high complexity. We justify this finding with the use of another measure of complexity, namely, the statistical complexity. We show that the networks can be tuned to the regime of maximal complexity by adding a suitable amount of noise to the deterministic Boolean dynamics. In fact, we show that for networks with Poisson degree distributions, all networks ranging from subcritical to slightly supercritical can be tuned with noise to reach maximal set complexity in their dynamics. For networks with a fixed number of inputs this is true for near-critical networks. This increase in complexity is obtained at the expense of disruption in information flow. For a large ensemble of networks showing maximal complexity, there exists a balance between noise and contracting dynamics in the state space. In networks that are close to critical, the intrinsic noise required for the tuning is smaller and thus also has the smallest effect on information processing in the system. Our results suggest that the maximization of complexity near the state transition might be a more general phenomenon in physical systems, and that noise present in a system may in fact be useful in retaining the system in a state with high information content.
Wone, B W M; Madsen, Per; Donovan, E R;
2015-01-01
Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selecti...
Network Decomposition and Maximum Independent Set Part I: Theoretic Basis
朱松年; 朱嫱
2003-01-01
The structure and characteristics of a connected network are analyzed, and a special kind of sub-network, which can optimize the iteration processes, is discovered. Then, the sufficient and necessary conditions for obtaining the maximum independent set are deduced. It is found that the neighborhood of this sub-network possesses similar characteristics, but the two can never be incorporated together. In particular, it is shown that the network can be divided into two parts in a certain way, and that both of them can be transformed into a pair-sets network, where the special sub-networks and their neighborhoods appear alternately distributed throughout the entire pair-sets network. Using this characteristic, a decomposition of the network that loses no solutions is obtained. All of the above prepares the ground for developing a much better algorithm with a polynomial time bound for an odd network in the application research part of this subject.
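For contrast with the maximum independent set discussed above, a merely maximal independent set (one that cannot be extended, but need not be largest) is easy to compute with a greedy pass. A minimal sketch under an assumed dict-of-sets adjacency representation; nothing here is taken from the paper's algorithm:

```python
def greedy_mis(adj):
    """Greedily build a maximal independent set.

    adj -- dict mapping each vertex to a set of neighbours.
    Returns a set S: no two members are adjacent, and every vertex
    outside S has a neighbour in S (maximality).
    """
    selected, blocked = set(), set()
    for v in sorted(adj):          # deterministic order for reproducibility
        if v not in blocked:
            selected.add(v)
            blocked.add(v)
            blocked |= adj[v]      # neighbours can no longer be chosen
    return selected
```

On the path 0-1-2-3 the pass selects 0, blocks 1, selects 2, and blocks 3, returning {0, 2}, which here happens to also be maximum.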
San Martín, René; Appelbaum, Lawrence G; Pearson, John M; Huettel, Scott A; Woldorff, Marty G
2013-04-17
Success in many decision-making scenarios depends on the ability to maximize gains and minimize losses. Even if an agent knows which cues lead to gains and which lead to losses, that agent could still make choices yielding suboptimal rewards. Here, by analyzing event-related potentials (ERPs) recorded in humans during a probabilistic gambling task, we show that individuals' behavioral tendencies to maximize gains and to minimize losses are associated with their ERP responses to the receipt of those gains and losses, respectively. We focused our analyses on ERP signals that predict behavioral adjustment: the frontocentral feedback-related negativity (FRN) and two P300 (P3) subcomponents, the frontocentral P3a and the parietal P3b. We found that, across participants, gain maximization was predicted by differences in amplitude of the P3b for suboptimal versus optimal gains (i.e., P3b amplitude difference between the least good and the best gains). Conversely, loss minimization was predicted by differences in the P3b amplitude to suboptimal versus optimal losses (i.e., difference between the worst and the least bad losses). Finally, we observed that the P3a and P3b, but not the FRN, predicted behavioral adjustment on subsequent trials, suggesting a specific adaptive mechanism by which prior experience may alter ensuing behavior. These findings indicate that individual differences in gain maximization and loss minimization are linked to individual differences in rapid neural responses to monetary outcomes.
An addition theorem and maximal zero-sum free sets in Z/pZ
Eric, Balandraud
2009-01-01
Using the polynomial method in additive number theory, this article establishes a new addition theorem for the set of subsums of a set satisfying $A\cap(-A)=\emptyset$ in $\mathbb{Z}/p\mathbb{Z}$: \[|\Sigma(A)|\geqslant\min\{p,1+\frac{|A|(|A|+1)}{2}\}.\] The proof is similar in nature to Alon, Nathanson and Ruzsa's proof of the Erdős-Heilbronn conjecture (proved initially by Dias da Silva and Hamidoune \cite{DH}). A key point in the proof of this theorem is the evaluation of some binomial determinants that have been studied in the work of Gessel and Viennot. A generalization to the set of subsums of a sequence is derived, leading to a structural result on zero-sum free sequences. As another application, it is established that for any prime number $p$, a maximal zero-sum free set in $\mathbb{Z}/p\mathbb{Z}$ has cardinality the greatest integer $k$ such that \[\frac{k(k+1)}{2} < p.\]
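The bound above can be checked by brute force for small primes. A sketch, under the assumption that $\Sigma(A)$ includes the empty subset sum (which makes the "+1" in the bound tight on small examples); the function name is illustrative:

```python
from itertools import combinations

def subsums(A, p):
    """All subset sums of A modulo p, including the empty sum 0."""
    sums = set()
    for r in range(len(A) + 1):
        for comb in combinations(A, r):
            sums.add(sum(comb) % p)
    return sums

# A = {1, 2, 3} in Z/11Z satisfies A ∩ (-A) = ∅, since -A = {8, 9, 10}.
p, A = 11, {1, 2, 3}
k = len(A)
bound = min(p, 1 + k * (k + 1) // 2)   # the theorem's lower bound on |Sigma(A)|
```

For this example $\Sigma(A) = \{0,\dots,6\}$, so the bound $\min(11, 7) = 7$ is attained with equality.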
PMCR-Miner: parallel maximal confident association rules miner algorithm for microarray data set.
Zakaria, Wael; Kotb, Yasser; Ghaleb, Fayed F M
2015-01-01
The MCR-Miner algorithm mines all maximal high-confidence association rules from the microarray up/down-expressed genes data set. This paper introduces two new algorithms: IMCR-Miner and PMCR-Miner. The IMCR-Miner algorithm is an extension of the MCR-Miner algorithm with some improvements. These improvements implement a novel way to store the samples of each gene as a list of unsigned integers in order to benefit from bitwise operations. In addition, the IMCR-Miner algorithm overcomes the drawbacks faced by the MCR-Miner algorithm by setting some restrictions to ignore repeated comparisons. The PMCR-Miner algorithm is a parallel version of the new proposed IMCR-Miner algorithm. The PMCR-Miner algorithm is based on shared-memory systems and task parallelism, where no time is needed in the process of sharing and combining data between processors. The experimental results on real microarray data sets show that the PMCR-Miner algorithm is more efficient and scalable than the counterparts.
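The bitwise-storage idea credited to IMCR-Miner above can be sketched as follows. The helper name and the use of a single arbitrary-precision Python int (standing in for a list of unsigned machine words) are illustrative assumptions:

```python
def to_bitmask(samples, universe):
    """Encode a set of sample ids as one integer bitmask.

    Bit i is set iff universe[i] is present in `samples`.  Python ints
    are arbitrary precision, so one int stands in for the lists of
    unsigned integers described in the paper.
    """
    mask = 0
    for i, s in enumerate(universe):
        if s in samples:
            mask |= 1 << i
    return mask

universe = ["s1", "s2", "s3", "s4"]
gene_a = to_bitmask({"s1", "s3"}, universe)         # bits 0 and 2 set
gene_b = to_bitmask({"s1", "s2", "s3"}, universe)   # bits 0, 1, 2 set
common = gene_a & gene_b                  # samples shared by both genes
support = bin(common).count("1")          # popcount = co-occurrence count
```

A single AND plus a popcount replaces a set intersection, which is the speed-up such encodings typically buy.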
David A Milder
The repetitive discharges required to produce a sustained muscle contraction result in activity-dependent hyperpolarization of the motor axons and a reduction in the force-generating capacity of the muscle. We investigated the relationship between these changes in the adductor pollicis muscle and the motor axons of its ulnar nerve supply, and the reproducibility of these changes. Ten subjects performed a 1-min maximal voluntary contraction. Activity-dependent changes in axonal excitability were measured using threshold tracking with electrical stimulation at the wrist; changes in the muscle were assessed as evoked and voluntary electromyography (EMG) and isometric force. Separate components of axonal excitability and muscle properties were tested at 5 min intervals after the sustained contraction in 5 separate sessions. The current threshold required to produce the target muscle action potential increased immediately after the contraction by 14.8% (p<0.05), reflecting decreased axonal excitability secondary to hyperpolarization. This was not correlated with the decline in amplitude of muscle force or evoked EMG. A late reversal in threshold current after the initial recovery from hyperpolarization peaked at -5.9% at ∼35 min (p<0.05). This pattern was mirrored by other indices of axonal excitability, revealing a previously unreported depolarization of motor axons in the late recovery period. Measures of axonal excitability were relatively stable at rest but less so after sustained activity. The coefficient of variation (CoV) for threshold current increase was higher after activity (CoV 0.54, p<0.05) whereas changes in voluntary (CoV 0.12) and evoked twitch (CoV 0.15) force were relatively stable. These results demonstrate that activity-dependent changes in motor axon excitability are unlikely to contribute to concomitant changes in the muscle after sustained activity in healthy people. The variability in axonal excitability after sustained activity
Maximal independent set graph partitions for representations of body-centered cubic lattices
Erleben, Kenny
2009-01-01
corresponding to the leaves of a quad-tree thus has a smaller memory foot-print. The adjacency information in the graph relieves one from going up and down the quad-tree when searching for neighbors. This results in constant time complexities for refinement and coarsening operations....
Linear scaling calculation of maximally localized Wannier functions with atomic basis set.
Xiang, H J; Li, Zhenyu; Liang, W Z; Yang, Jinlong; Hou, J G; Zhu, Qingshi
2006-06-21
We have developed a linear scaling algorithm for calculating maximally localized Wannier functions (MLWFs) using an atomic orbital basis. An O(N) ground state calculation is carried out to get the density matrix (DM). Through a projection of the DM onto atomic orbitals and a subsequent O(N) orthogonalization, we obtain initial orthogonal localized orbitals. These orbitals can be maximally localized in linear scaling by simple Jacobi sweeps. Our O(N) method is validated by applying it to a water molecule and to wurtzite ZnO. The linear scaling behavior of the new method is demonstrated by computing the MLWFs of boron nitride nanotubes.
Maximal translational equivalence classes of musical patterns in point-set representations
Collins, Tom; Meredith, David
2013-01-01
Representing musical notes as points in pitch-time space causes repeated motives and themes to appear as translationally related patterns that often correspond to maximal translatable patterns (MTPs). However, an MTP is also often the union of a salient pattern with one or two temporally isolated notes. This has been called the problem of isolated membership. Examining the MTPs in musical works reveals that salient patterns may correspond more often to the intersections of MTPs than to the MTPs themselves. We therefore explore patterns that are maximal with respect to their translational...
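Under the standard definition used in this literature, the MTP for a translation vector v is the set of points that remain in the point set when translated by v. A brute-force sketch, quadratic in the number of points (the function name and the (onset, pitch) encoding are illustrative):

```python
from collections import defaultdict

def mtps(points):
    """Maximal translatable patterns of a point set.

    For each nonzero vector v occurring between two points,
    MTP(v) = {p in points : p + v in points}.
    Points are (onset_time, pitch) pairs.
    """
    pts = set(points)
    table = defaultdict(set)
    for p in pts:
        for q in pts:
            if p != q:
                v = (q[0] - p[0], q[1] - p[1])
                table[v].add(p)
    return dict(table)
```

For the four notes {(0,60), (1,62), (4,60), (5,62)}, the vector (4, 0) yields the MTP {(0,60), (1,62)}: the two-note motive repeated four time units later.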
Carole Cometti
2011-12-01
The present study investigated the effects of between-set interventions on neuromuscular function of the knee extensors during six sets of 10 isokinetic (120°·s-1) maximal concentric contractions separated by three minutes. Twelve healthy men (age: 23.9 ± 2.4 yrs) were tested for four different between-set recovery conditions applied during two minutes: passive recovery, active recovery (cycling), electromyostimulation and stretching, in a randomized, crossover design. Before, during and at the end of the isokinetic session, torque and thigh muscle electromyographic activity were measured during maximal voluntary contractions and electrically-evoked doublets. Activation level was calculated using the twitch interpolation technique. While quadriceps electromyographic activity and activation level were significantly decreased at the end of the isokinetic session (-5.5 ± 14.2% and -2.7 ± 4.8%; p < 0.05), significant decreases in maximal voluntary contractions and doublets were observed after the third set (respectively -0.8 ± 12.1% and -5.9 ± 9.9%; p < 0.05). Whatever the recovery modality applied, torque was back to initial values after each recovery period. The present results showed that fatigue appeared progressively during the isokinetic session, with peripheral alterations occurring first, followed by central ones. Recovery interventions between sets did not modify the fatigue time course as compared with passive recovery. It appears that the interval between sets (3 min) was long enough to provide recovery regardless of the interventions.
How to Combine Independent Data Sets for the Same Quantity
Fox, Ronald F; Miller, Jack
2010-01-01
This paper describes a recent mathematical method called conflation for consolidating data from independent experiments that are designed to measure the same quantity, such as Planck's constant or the mass of the top quark. Conflation is easy to calculate and visualize, and minimizes the maximum loss in Shannon information when consolidating several independent distributions into a single distribution. To give the experimentalist a much more transparent presentation than the earlier mathematical treatment, the main basic properties of conflation are derived in the special case of normal (Gaussian) data. Included are examples of applications to real data from measurements of the fundamental physical constants and from measurements in high energy physics, and the conflation operation is generalized to weighted conflation for situations when the underlying experiments are not uniformly reliable.
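For the Gaussian special case treated above, the conflation of independent normal distributions is again normal, with precision-weighted mean and summed precisions (conflation is the normalized product of the densities). A small sketch; the function name is assumed for illustration:

```python
def conflate_normals(means, variances):
    """Conflate independent normal measurements N(mu_i, var_i).

    The normalized product of Gaussian densities is Gaussian, with
    precision-weighted mean and summed precisions.
    Returns (mean, variance) of the conflated distribution.
    """
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    mean = sum(m * w for m, w in zip(means, precisions)) / total
    return mean, 1.0 / total

# two equally reliable measurements of the same constant
m, v = conflate_normals([10.0, 12.0], [1.0, 1.0])
```

Two equally reliable measurements average, and the variance halves; an unreliable measurement is automatically down-weighted by its small precision.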
Decomposing a planar graph into an independent set and a 3-degenerate graph
Thomassen, Carsten
2001-01-01
We prove the conjecture made by O. V. Borodin in 1976 that the vertex set of every planar graph can be decomposed into an independent set and a set inducing a 3-degenerate graph. (C) 2001 Academic Press.
Independent sets in asteroidal triple-free graphs
Broersma, Haitze J.; Kloks, Ton; Kloks, A.J.J.; Kratsch, Dieter; Müller, Haiko
1997-01-01
An asteroidal triple is a set of three vertices such that there is a path between any pair of them avoiding the closed neighborhood of the third. A graph is called AT-free if it does not have an asteroidal triple. We show that there is an O(n² · (m̄ + 1)) time algorithm to compute the maximum
Set back thermostat and controller with independent multizone control
1985-11-01
A multi-zone setback thermostat and controller has been developed. It is based on a RCA1802 microprocessor and is designed to optimize a variable air volume air conditioning system for comfort and convenience of control, while achieving energy savings. The unit reads temperatures from up to 16 zones and controls temperatures by adjusting dampers. Each zone can be independently scheduled for four different setpoints for each day. Preset values are maintained in each zone that has no user setpoints entered, enabling the unit to control the air handling system immediately on power up. An alphanumeric display enables the user to view data in memory and as it is keyed in; unauthorized use can be prevented by entering appropriate codes. This report includes detailed descriptions of the thermostat and components, including circuit diagrams, a program listing, software description, and a user manual. A market survey was conducted in Calgary, Alberta to determine the target market and a suitable marketing strategy. Results indicate a place for this product in building retrofit situations; existing pricing that was obtained shows the unit could have an installed price advantage of about $500 per zone. 7 refs., 31 figs.
Angell, Rico
2016-01-01
We consider the problem of maximizing the spread of influence in a social network by choosing a fixed number of initial seeds --- a central problem in the study of network cascades. The majority of existing work on this problem, formally referred to as the influence maximization problem, is designed for submodular cascades. Despite the empirical evidence that many cascades are non-submodular, little work has been done focusing on non-submodular influence maximization. We propose a new heuristic for solving the influence maximization problem and show via simulations on real-world and synthetic networks that our algorithm outputs more influential seed sets than the state-of-the-art greedy algorithm in many natural cases, with average improvements of 7% for submodular cascades, and 55% for non-submodular cascades. Our heuristic uses a dynamic programming approach on a hierarchical decomposition of the social network to leverage the relation between the spread of cascades and the community structure of social net...
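The state-of-the-art greedy baseline mentioned above can be sketched with Monte-Carlo estimates of spread under the independent cascade model. The edge probability, run count, and names are illustrative assumptions, and this is the standard baseline, not the paper's dynamic-programming heuristic:

```python
import random

def simulate_ic(adj, seeds, p, rng):
    """One Monte-Carlo run of the independent cascade model."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in active and rng.random() < p:  # each edge fires once
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(adj, k, p=0.5, runs=200, seed=0):
    """Pick k seeds, each maximizing the estimated marginal spread."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for v in sorted(adj):
            if v in chosen:
                continue
            est = sum(simulate_ic(adj, chosen + [v], p, rng)
                      for _ in range(runs)) / runs
            if est > best_spread:
                best, best_spread = v, est
        chosen.append(best)
    return chosen
```

On a star graph the hub has the largest expected spread, so the greedy pass selects it first.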
Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.
Nath, Abhigyan; Subbiah, Karthikeyan
2015-12-01
Lipocalins are short in sequence length and perform several important biological functions. These proteins have less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time consuming process. The computational methods based on sequence similarity for allocating putative members to this family are also largely elusive due to the low sequence similarity existing among the members of this family. Consequently, machine learning methods become a viable alternative for their prediction by using the underlying sequence/structurally derived features as the input. Ideally, any machine learning based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. Near perfect learning can be achieved by training the model with diverse types of input instances belonging to the different regions of the entire input space. Furthermore, the prediction performance can be improved by balancing the training set, as imbalanced data sets tend to bias the prediction towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability without any classification bias, through diversified and balanced training sets, as well as (ii) enhanced prediction accuracy, by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised K-means clustering algorithm to create diversified clusters of input patterns and created the diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and a K nearest neighbour algorithm (which produced greater specificity) to achieve the enhanced predictive performance.
Boyen, P.; Neven, F.; Valentim, F.L.; Dijk, van A.D.J.
2013-01-01
Correlated motif covering (CMC) is the problem of finding a set of motif pairs, i.e., pairs of patterns, in the sequences of proteins from a protein-protein interaction network (PPI-network) that describe the interactions in the network as concisely as possible. In other words, a perfect solution fo
Maximizing the Lifetime of Wireless Sensor Networks Using Multiple Sets of Rendezvous
Bo Li
2015-01-01
In wireless sensor networks (WSNs), there is a “crowded center effect” where the energy of nodes located near a data sink drains much faster than that of other nodes, resulting in a short network lifetime. To mitigate the “crowded center effect,” rendezvous points (RPs) are used to gather data from other nodes. In order to prolong the lifetime of the WSN further, we propose using multiple sets of RPs in turn to average the energy consumption of the RPs. The problem is how to select the multiple sets of RPs and how long to use each set of RPs. An optimal algorithm and a heuristic algorithm are proposed to address this problem. The optimal algorithm is highly complex and only suitable for small scale WSNs. The performance of the proposed algorithms is evaluated through simulations. The simulation results indicate that the heuristic algorithm approaches the optimal one and that using multiple RP sets can significantly prolong network lifetime.
Bounds on the number of vertex independent sets in a graph
Vestergaard, Preben D.; Pedersen, Anders Sune
2006-01-01
We consider the number of vertex independent sets i(G). In general, the problem of determining the value of i(G) is NP-complete. We present several upper and lower bounds for i(G) in terms of order, size or independence number. We obtain improved bounds for i(G) on restricted graph classes...
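For small graphs, i(G) can be computed by brute force, which is a handy way to spot-check bounds like those above. Whether i(G) counts the empty set varies by convention; it is included in this sketch, and the function name is illustrative:

```python
from itertools import combinations

def count_independent_sets(n, edges):
    """Count all independent vertex sets of an n-vertex graph,
    including the empty set, by brute force (exponential in n)."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                count += 1
    return count
```

For the path on n vertices this reproduces the Fibonacci pattern i(P_n) = F_{n+2}: the path 0-1-2 has the five independent sets {}, {0}, {1}, {2}, {0,2}.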
Grignon, Jessica S; Ledikwe, Jenny H; Makati, Ditsapelo; Nyangah, Robert; Sento, Baraedi W; Semo, Bazghina-Werq
2014-01-01
To address health systems challenges in limited-resource settings, global health initiatives, particularly the President's Emergency Plan for AIDS Relief, have seconded health workers to the public sector. Implementation considerations for secondment as a health workforce development strategy are not well documented. The purpose of this article is to present outcomes, best practices, and lessons learned from a President's Emergency Plan for AIDS Relief-funded secondment program in Botswana. Outcomes are documented across four World Health Organization health systems' building blocks. Best practices include documentation of joint stakeholder expectations, collaborative recruitment, and early identification of counterparts. Lessons learned include inadequate ownership, a two-tier employment system, and ill-defined position duration. These findings can inform program and policy development to maximize the benefit of health workforce secondment. Secondment requires substantial investment, and emphasis should be placed on high-level technical positions responsible for building systems, developing health workers, and strengthening government to translate policy into programs.
Takahashi, Jun; Takabe, Satoshi; Hukushima, Koji
2017-07-01
A recently proposed exact algorithm for the maximum independent set problem is analyzed. The typical running time is improved exponentially in some parameter regions compared to simple binary search. Furthermore, the algorithm overcomes the core transition point, where the conventional leaf removal algorithm fails, and works up to the replica symmetry breaking (RSB) transition point. This suggests that a leaf removal core itself is not enough for typical hardness in the random maximum independent set problem, providing further evidence for RSB being the obstacle for algorithms in general.
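A minimal sketch of the conventional leaf-removal algorithm referred to above: repeatedly place a vertex of degree at most one in the independent set and delete it together with its neighbour; the vertices that survive form the core on which the heuristic stalls. The dict-of-sets representation and names are assumptions:

```python
def leaf_removal(adj):
    """Leaf-removal heuristic for the maximum independent set.

    While some vertex has degree <= 1, put it in the independent set
    and delete it together with its neighbour.  Returns
    (independent_set, core), where `core` is the leftover vertex set.
    """
    adj = {u: set(vs) for u, vs in adj.items()}
    independent = set()

    def delete(v):
        for w in adj.pop(v):
            if w in adj:
                adj[w].discard(v)

    leaves = [u for u in adj if len(adj[u]) <= 1]
    while leaves:
        u = leaves.pop()
        if u not in adj or len(adj[u]) > 1:   # stale entry, skip
            continue
        independent.add(u)
        neighbours = set(adj[u])
        delete(u)
        for r in neighbours:
            if r in adj:
                delete(r)
        leaves = [x for x in adj if len(adj[x]) <= 1]
    return independent, set(adj)
```

On trees (and, typically, on sparse random graphs below the core transition) the core comes back empty and the returned set is a maximum independent set; a triangle, by contrast, has no leaves and is returned whole as the core.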
An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets
Nielsen, Michael Bang; Museth, Ken
2004-01-01
Level sets have recently proven successful in many areas of computer graphics including water simulations and geometric modeling. However, current implementations of these level set methods are limited by factors such as computational efficiency, storage requirements and the restriction to a domain enforced by the convex boundaries of an underlying cartesian computational grid. Here we present a novel very memory efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid independent high resolution level sets. The key features of our new data structure are... difference schemes typically used to numerically solve the level set equation on fixed uniform grids.
Counting independent sets of a fixed size in graphs with a given minimum degree
Engbers, John
2012-01-01
Galvin showed that for all fixed $\\delta$ and sufficiently large $n$, the $n$-vertex graph with minimum degree $\\delta$ that admits the most independent sets is the complete bipartite graph $K_{\\delta,n-\\delta}$. He conjectured that except perhaps for some small values of $t$, the same graph yields the maximum count of independent sets of size $t$ for each possible $t$. Evidence for this conjecture was recently provided by Alexander, Cutler, and Mink, who showed that for all triples $(n,\\delta, t)$ with $t\\geq 3$, no $n$-vertex {\\em bipartite} graph with minimum degree $\\delta$ admits more independent sets of size $t$ than $K_{\\delta,n-\\delta}$. Here we make further progress. We show that for all triples $(n,\\delta,t)$ with $\\delta \\leq 3$ and $t\\geq 3$, no $n$-vertex graph with minimum degree $\\delta$ admits more independent sets of size $t$ than $K_{\\delta,n-\\delta}$, and we obtain the same conclusion for $\\delta > 3$ and $t \\geq 2\\delta +1$. Our proofs lead us naturally to the study of an interesting famil...
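The statement can be spot-checked by brute force on tiny cases. Below, $K_{2,3}$ is compared with the 5-cycle, another graph with minimum degree $\delta = 2$, for independent sets of size $t = 3$; the encodings and the function name are illustrative:

```python
from itertools import combinations

def ind_sets_of_size(n, edges, t):
    """Number of independent sets of exactly t vertices."""
    es = {frozenset(e) for e in edges}
    return sum(
        1 for sub in combinations(range(n), t)
        if all(frozenset(pair) not in es for pair in combinations(sub, 2))
    )

# K_{2,3}: parts {0, 1} and {2, 3, 4}.  C_5: a 5-cycle.
# Both graphs have 5 vertices and minimum degree 2.
k23 = [(a, b) for a in (0, 1) for b in (2, 3, 4)]
c5 = [(i, (i + 1) % 5) for i in range(5)]
```

$K_{2,3}$ has one independent triple (the larger part) while $C_5$ has none, consistent with the theorem for $t \geq 3$; note that for $t = 2$ the ordering can reverse, which is why small $t$ is excluded.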
Merrifield-Simmons index and minimum number of independent sets in short trees
Frendrup, Allan; Pedersen, Anders Sune; Sapozhenko, Alexander A.;
2013-01-01
In Ars Comb. 84 (2007), 85-96, Pedersen and Vestergaard posed the problem of determining a lower bound for the number of independent sets in a tree of fixed order and diameter d. Asymptotically, we give here a complete solution for trees of diameter d...
Reliable pre-eclampsia pathways based on multiple independent microarray data sets.
Kawasaki, Kaoru; Kondoh, Eiji; Chigusa, Yoshitsugu; Ujita, Mari; Murakami, Ryusuke; Mogami, Haruta; Brown, J B; Okuno, Yasushi; Konishi, Ikuo
2015-02-01
Pre-eclampsia is a multifactorial disorder characterized by heterogeneous clinical manifestations. Gene expression profiling of preeclamptic placentas has provided different and even opposite results, partly due to data compromised by various experimental artefacts. Here we aimed to identify reliable pre-eclampsia-specific pathways using multiple independent microarray data sets. Gene expression data of control and preeclamptic placentas were obtained from Gene Expression Omnibus. Single-sample gene-set enrichment analysis was performed to generate gene-set activation scores of 9707 pathways obtained from the Molecular Signatures Database. Candidate pathways were identified by t-test-based screening using data sets GSE10588, GSE14722 and GSE25906. Additionally, recursive feature elimination was applied to arrive at a further reduced set of pathways. To assess the validity of the pre-eclampsia pathways, a statistically-validated protocol was executed using five data sets, including two independent other validation data sets, GSE30186 and GSE44711. Quantitative real-time PCR was performed for genes in a panel of potential pre-eclampsia pathways using placentas of 20 women with normal or severe preeclamptic singleton pregnancies (n = 10, respectively). A panel of ten pathways was found to discriminate women with pre-eclampsia from controls with high accuracy. Among these were pathways not previously associated with pre-eclampsia, such as the GABA receptor pathway, as well as pathways that have already been linked to pre-eclampsia, such as the glutathione and CDKN1C pathways. mRNA expression of GABRA3 (GABA receptor pathway), GCLC and GCLM (glutathione metabolic pathway), and CDKN1C was significantly reduced in the preeclamptic placentas. In conclusion, ten accurate and reliable pre-eclampsia pathways were identified based on multiple independent microarray data sets.
A pathway-based classification may be a worthwhile approach to elucidate the pathogenesis of pre-eclampsia.
Tesch, Carmen M; de Vivie-Riedle, Regina
2004-12-22
The phase of quantum gates is one key issue for the implementation of quantum algorithms. In this paper we first investigate the phase evolution of global molecular quantum gates, which are realized by optimally shaped femtosecond laser pulses. The specific laser fields are calculated using the multitarget optimal control algorithm, our modification of the optimal control theory relevant for application in quantum computing. As qubit system we use vibrational modes of polyatomic molecules, here the two IR-active modes of acetylene. Exemplarily, we present our results for a Pi gate, which shows a strong dependence on the phase, leading to a significant decrease in quantum yield. To correct for this unwanted behavior we include pressure on the quantum phase in our multitarget approach. In addition the accuracy of these phase corrected global quantum gates is enhanced. Furthermore we could show that in our molecular approach phase corrected quantum gates and basis set independence are directly linked. Basis set independence is also another property highly required for the performance of quantum algorithms. By realizing the Deutsch-Jozsa algorithm in our two qubit molecular model system, we demonstrate the good performance of our phase corrected and basis set independent quantum gates.
Meta-analysis of pathway enrichment: combining independent and dependent omics data sets.
Alexander Kaever
A major challenge in current systems biology is the combination and integrative analysis of large data sets obtained from different high-throughput omics platforms, such as mass spectrometry based Metabolomics and Proteomics or DNA microarray or RNA-seq-based Transcriptomics. Especially in the case of non-targeted Metabolomics experiments, where it is often impossible to unambiguously map ion features from mass spectrometry analysis to metabolites, the integration of more reliable omics technologies is highly desirable. A popular method for the knowledge-based interpretation of single data sets is the (Gene) Set Enrichment Analysis. In order to combine the results from different analyses, we introduce a methodical framework for the meta-analysis of p-values obtained from Pathway Enrichment Analysis (Set Enrichment Analysis based on pathways) of multiple dependent or independent data sets from different omics platforms. For dependent data sets, e.g. obtained from the same biological samples, the framework utilizes a covariance estimation procedure based on the nonsignificant pathways in single data set enrichment analysis. The framework is evaluated and applied in the joint analysis of Metabolomics mass spectrometry and Transcriptomics DNA microarray data in the context of plant wounding. In extensive studies of simulated data set dependence, the introduced correlation could be fully reconstructed by means of the covariance estimation based on pathway enrichment. By restricting the range of p-values of pathways considered in the estimation, the overestimation of correlation, which is introduced by the significant pathways, could be reduced. When applying the proposed methods to the real data sets, the meta-analysis was shown not only to be a powerful tool to investigate the correlation between different data sets and summarize the results of multiple analyses but also to distinguish experiment-specific key pathways.
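For the fully independent case described above, a standard way to combine per-pathway p-values across data sets is Fisher's method. This sketch is a generic baseline, not the covariance-corrected framework of the paper:

```python
import math

def fisher_combine(pvalues):
    """Fisher's method for independent p-values.

    The statistic X = -2 * sum(log p_i) follows a chi-square
    distribution with 2k degrees of freedom under the global null.
    Returns the combined p-value.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # chi-square survival function for even dof 2k, via the closed
    # form P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total
```

A single p-value passes through unchanged, and several moderately small p-values combine into a much smaller one, which is the aggregation behaviour wanted in a pathway meta-analysis.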
Decomposing a planar graph of girth 5 into an independent set and a forest
Kawarabayashi, Ken-ichi; Thomassen, Carsten
2009-01-01
We use a list-color technique to extend the result of Borodin and Glebov that the vertex set of every planar graph of girth at least 5 can be partitioned into an independent set and a set which induces a forest. We apply this extension to also extend Grötzsch's theorem that every planar triangle-free graph is 3-colorable. Let G be a plane graph. Assume that the distance between any two triangles is at least 4. Assume also that each triangle contains a vertex such that this vertex is on the outer face boundary and is not contained in any 4-cycle. Then G has chromatic number at most 3. Note that...
NEIGHBORHOOD UNION OF INDEPENDENT SETS AND HAMILTONICITY OF CLAW-FREE GRAPHS
XuXinping
2005-01-01
Let G be a graph. For any u ∈ V(G), let N(u) denote the neighborhood of u and d(u) = |N(u)| the degree of u. For any U ⊆ V(G), let N(U) = ∪_{u∈U} N(u) and d(U) = |N(U)|. A graph G is called claw-free if it has no induced subgraph isomorphic to K_{1,3}. One of the fundamental results concerning cycles in claw-free graphs is due to Tian Feng et al.: if G is a 2-connected claw-free graph of order n and d(u)+d(v)+d(w) ≥ n-2 for every independent vertex set {u,v,w} of G, then G is Hamiltonian. It is proved that, for any three positive integers s, t and w, if G is an (s+t+w-1)-connected claw-free graph of order n and d(S)+d(T)+d(W) > n-(s+t+w) for every three disjoint independent vertex sets S, T, W with |S|=s, |T|=t, |W|=w such that S∪T∪W is also independent, then G is Hamiltonian. Other related results are obtained too.
Parallel group independent component analysis for massive fMRI data sets
Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H.; Pekar, James J.; Lindquist, Martin A.; Eloyan, Ani; Caffo, Brian S.
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
Parallel group independent component analysis for massive fMRI data sets.
Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
Generalizations of the subject-independent feature set for music-induced emotion recognition.
Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet, how to achieve acceptable accuracy on a practical system with as fewer electrodes as possible is less concerned. This study evaluates a set of subject-independent features, based on differential power asymmetry of symmetric electrode pairs [1], with emphasis on its applicability to subject variability in music-induced emotion classification problem. Results of this study have evidently validated the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy in second-scale temporal resolution. These features could be generalized across subjects to detect emotion induced by music excerpts not limited to the music database that was used to derive the emotion-specific features.
An upper bound on the number of independent sets in a tree
Vestergaard, Preben Dahl; Pedersen, Anders Sune
The main result of this paper is an upper bound on the number of independent sets in a tree in terms of the order and diameter of the tree. This new upper bound is a refinement of the bound given by Prodinger and Tichy [Fibonacci Q., 20 (1982), no. 1, 16-21]. Finally, we give a sufficient condition for the new upper bound to be better than the upper bound given by Brigham, Chandrasekharan and Dutton [Fibonacci Q., 31 (1993), no. 2, 98-104].
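The quantity being bounded, the number of independent sets of a tree, can be computed exactly with a standard two-state subtree DP; a minimal sketch (the function name and vertex labeling are mine):

```python
from collections import defaultdict

def count_independent_sets(n, edges):
    """Count all independent sets (including the empty set) of a tree on
    vertices 0..n-1. Two-state DP per vertex v: inc[v] counts sets in v's
    subtree that contain v, exc[v] counts those that do not."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    inc, exc = [1] * n, [1] * n
    parent = [-1] * n
    order, stack, seen = [], [0], [False] * n
    seen[0] = True
    while stack:                      # iterative DFS to get a processing order
        v = stack.pop()
        order.append(v)
        for w in adj[v]:
            if not seen[w]:
                seen[w] = True
                parent[w] = v
                stack.append(w)
    for v in reversed(order):         # combine children bottom-up
        for w in adj[v]:
            if w != parent[v]:
                inc[v] *= exc[w]
                exc[v] *= inc[w] + exc[w]
    return inc[0] + exc[0]
```

On a path with n vertices this yields the Fibonacci number F(n+2), which is the Prodinger-Tichy connection cited above; a star on n vertices gives 2^(n-1)+1.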
An upper bound on the number of independent sets in a tree
Vestergaard, Preben D.; Pedersen, Anders Sune
2007-01-01
The main result of this paper is an upper bound on the number of independent sets in a tree in terms of the order and diameter of the tree. This new upper bound is a refinement of the bound given by Prodinger and Tichy [Fibonacci Q., 20 (1982), no. 1, 16-21]. Finally, we give a sufficient condition for the new upper bound to be better than the upper bound given by Brigham, Chandrasekharan and Dutton [Fibonacci Q., 31 (1993), no. 2, 98-104].
InfVis--platform-independent visual data mining of multidimensional chemical data sets.
Oellien, Frank; Ihlenfeldt, Wolf-Dietrich; Gasteiger, Johann
2005-01-01
The tremendous increase of chemical data sets, both in size and number, and the simultaneous desire to speed up the drug discovery process has resulted in an increasing need for a new generation of computational tools that assist in the extraction of information from data and allow for rapid and in-depth data mining. During recent years, visual data mining has become an important tool within the life sciences and drug discovery area, with the potential to keep data analysis from turning into a bottleneck. In this paper, we present InfVis, a platform-independent visual data mining tool for chemists, who usually only have little experience with classical data mining tools, for the visualization, exploration, and analysis of multivariate data sets. InfVis represents multidimensional data sets by using intuitive 3D glyph information visualization techniques. Interactive and dynamic tools such as dynamic query devices allow real-time, interactive data set manipulations and support the user in the identification of relationships and patterns. InfVis has been implemented in Java and Java3D and can be run on a broad range of platforms and operating systems. It can also be embedded as an applet in Web-based interfaces. We will present in this paper examples detailing the analysis of a reaction database that demonstrate how InfVis assists chemists in identifying and extracting hidden information.
Improved Mixing Condition on the Grid for Counting and Sampling Independent Sets
Restrepo, Ricardo; Tetali, Prasad
2011-01-01
We study the hard-core model defined on independent sets, where each independent set I in a graph G is weighted proportionally to $\\lambda^{|I|}$, for a positive real parameter $\\lambda$. For large $\\lambda$, computing the partition function (namely, the normalizing constant which makes the weighting a probability distribution on a finite graph) on graphs of maximum degree $D\\ge 3$, is a well known computationally challenging problem. More concretely, let $\\lambda_c(T_D)$ denote the critical value for the so-called uniqueness threshold of the hard-core model on the infinite D-regular tree; recent breakthrough results of Dror Weitz (2006) and Allan Sly (2010) have identified $\\lambda_c(T_D)$ as a threshold where the hardness of estimating the above partition function undergoes a computational transition. We focus on the well-studied particular case of the square lattice $\\integers^2$, and provide a new lower bound for the uniqueness threshold, in particular taking it well above $\\lambda_c(T_4)$. Our technique ...
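The partition function in question is simply a weighted count of independent sets; on small graphs it can be evaluated by brute force, which is useful for sanity-checking approximate counters. A sketch with names of my own choosing:

```python
from itertools import combinations

def hard_core_partition_function(vertices, edges, lam):
    """Z(G, lam) = sum over all independent sets I of G of lam**|I|,
    by exhaustive enumeration (exponential in |V|; small graphs only)."""
    edge_set = {frozenset(e) for e in edges}
    z = 0.0
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            # keep the subset only if no pair of its vertices is an edge
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                z += lam ** k
    return z
```

At lam = 1 this is the plain count of independent sets, so the single-edge graph gives 3 and the 3-vertex path gives 5.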
1983-05-15
which no two modules test each other, and the number of faulty modules is small. In this paper, we show that the implied faulty sets of one-step v... test outcomes, i.e., an outcome a_ij for each (i, j) in T, is called a syndrome. The diagnosis problem consists in partitioning S into the set Gs of non-faulty
Abutarboush, Hattan
2012-08-01
This paper presents the design of a low-profile compact printed antenna for fixed frequency and reconfigurable frequency bands. The antenna consists of a main patch, four sub-patches, and a ground plane to generate five frequency bands, at 0.92, 1.73, 1.98, 2.4, and 2.9 GHz, for different wireless systems. For the fixed-frequency design, the five individual frequency bands can be adjusted and set independently over the wide ranges of 18.78%, 22.75%, 4.51%, 11%, and 8.21%, respectively, using just one parameter of the antenna. By putting a varactor (diode) at each of the sub-patch inputs, four of the frequency bands can be controlled independently over wide ranges and the antenna has a reconfigurable design. The tunability ranges for the four bands of 0.92, 1.73, 1.98, and 2.9 GHz are 23.5%, 10.30%, 13.5%, and 3%, respectively. The fixed and reconfigurable designs are studied using computer simulation. For verification of simulation results, the two designs are fabricated and the prototypes are measured. The results show a good agreement between simulated and measured results. © 1963-2012 IEEE.
Helen Lunt
BACKGROUND: In research clinic settings, overweight adults undertaking HIIT (high intensity interval training) improve their fitness as effectively as those undertaking conventional walking programs but can do so within a shorter time spent exercising. We undertook a randomized controlled feasibility (pilot) study aimed at extending HIIT into a real world setting by recruiting overweight/obese, inactive adults into a group based activity program, held in a community park. METHODS: Participants were allocated into one of three groups. The two interventions, aerobic interval training and maximal volitional interval training, were compared with an active control group undertaking walking based exercise. Supervised group sessions (36 per intervention) were held outdoors. Cardiorespiratory fitness was measured using VO2max (maximal oxygen uptake, results expressed in ml/min/kg) before and after the 12 week interventions. RESULTS: On ITT (intention to treat) analyses, baseline (N = 49) and exit (N = 39) VO2max was 25.3±4.5 and 25.3±3.9, respectively. Participant allocation and baseline/exit VO2max by group was as follows: Aerobic interval training N = 16, 24.2±4.8/25.6±4.8; maximal volitional interval training N = 16, 25.0±2.8/25.2±3.4; walking N = 17, 26.5±5.3/25.2±3.6. The post intervention change in VO2max was +1.01 in the aerobic interval training, -0.06 in the maximal volitional interval training and -1.03 in the walking subgroups. The aerobic interval training subgroup increased VO2max compared to walking (p = 0.03). The actual (observed, rather than prescribed) time spent exercising (minutes per week, ITT analysis) was 74 for aerobic interval training, 45 for maximal volitional interval training and 116 for walking (p = 0.001). On descriptive analysis, the walking subgroup had the fewest adverse events. CONCLUSIONS: In contrast to earlier studies, the improvement in cardiorespiratory fitness in a
Hume, Kara; Plavnick, Joshua; Odom, Samuel L.
2012-01-01
Strategies that promote the independent demonstration of skills across educational settings are critical for improving the accessibility of general education settings for students with ASD. This research assessed the impact of an individual work system on the accuracy of task completion and level of adult prompting across educational settings.…
Phase Transition for Glauber Dynamics for Independent Sets on Regular Trees
Restrepo, Ricardo; Vera, Juan C; Vigoda, Eric; Yang, Linji
2010-01-01
We study the effect of boundary conditions on the relaxation time of the Glauber dynamics for the hard-core model on the tree. The hard-core model is defined on the set of independent sets weighted by a parameter $\\lambda$, called the activity. The Glauber dynamics is the Markov chain that updates a randomly chosen vertex in each step. On the infinite tree with branching factor $b$, the hard-core model can be equivalently defined as a broadcasting process with a parameter $\\omega$ which is the positive solution to $\\lambda=\\omega(1+\\omega)^b$, and vertices are occupied with probability $\\omega/(1+\\omega)$ when their parent is unoccupied. This broadcasting process undergoes a phase transition between the so-called reconstruction and non-reconstruction regions at $\\omega_r\\approx \\ln{b}/b$. Reconstruction has been of considerable interest recently since it appears to be intimately connected to the efficiency of local algorithms on locally tree-like graphs, such as sparse random graphs. In this paper we show tha...
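The reparametrization λ = ω(1+ω)^b has a unique positive root in ω for each λ > 0, so ω can be recovered numerically; a small sketch of my own (bisection; the function name is an assumption, not from the paper):

```python
def occupation_parameter(lam, b, tol=1e-12):
    """Solve lam = w * (1 + w)**b for the unique positive root w.

    The left side is strictly increasing in w on (0, inf), so plain
    bisection converges once the root is bracketed."""
    lo, hi = 0.0, max(lam, 1.0)
    while hi * (1 + hi) ** b < lam:   # grow the bracket if needed
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * (1 + mid) ** b < lam:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

As stated above, vertices are then occupied with probability ω/(1+ω) when their parent is unoccupied; e.g. for b = 2 and λ = 4 the root is ω = 1, giving occupation probability 1/2.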
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M; Chughtai, Aamer; Patel, Smita; Wei, Jun; Cascade, Philip N; Kazerooni, Ella A
2009-08-01
The authors are developing a computer-aided detection system for pulmonary emboli (PE) in computed tomographic pulmonary angiography (CTPA) scans. The pulmonary vessel tree is extracted using a 3D expectation-maximization segmentation method based on the analysis of eigenvalues of Hessian matrices at multiple scales. A parallel multiprescreening method is applied to the segmented vessels to identify volumes of interest (VOIs) that contain suspicious PE. A linear discriminant analysis (LDA) classifier with feature selection is designed to reduce false positives (FPs). Features that characterize the contrast, gray level, and size of PE are extracted as input predictor variables to the LDA classifier. With IRB approval, 59 CTPA PE cases were collected retrospectively from the patient files (UM cases). With access permission, 69 CTPA PE cases were randomly selected from the data set of the prospective investigation of pulmonary embolism diagnosis (PIOPED) II clinical trial. Extensive lung parenchymal or pleural diseases were present in 22/59 UM and 26/69 PIOPED cases. Experienced thoracic radiologists manually marked 595 and 800 PE as the reference standards in the UM and PIOPED data sets, respectively. PE occlusion of arteries ranged from 5% to 100%, with PE located from the main pulmonary artery to the subsegmental artery levels. Of the 595 PE identified in the UM cases, 245 and 350 PE were located in the subsegmental arteries and the more proximal arteries, respectively. The detection performance was assessed by free response ROC (FROC) analysis. The FROC analysis indicated that the PE detection system could achieve an overall sensitivity of 80% at 18.9 FPs/case for the PIOPED cases when the LDA classifier was trained with the UM cases. The test sensitivity with the UM cases was 80% at 22.6 FPs/case when the LDA classifier was trained with the PIOPED cases. The detection performance depended on the arterial level where the PE was located and on the
Central bank Financial Independence
J.Ramon Martinez-Resano
2004-01-01
Central bank independence is a multifaceted institutional design. The financial component has seldom been analysed. This paper intends to set a comprehensive conceptual background for central bank financial independence. Quite often central banks are modelled as robot-like maximizers of some goal. This perspective neglects the fact that central bank functions are inevitably deployed on its balance sheet and have effects on its income statement. A financially independent central bank exhibits ...
Independent validation of the Pain Management Plan in a multi-disciplinary pain team setting
Quinlan, Joanna; Hughes, Richard; Laird, David
2016-01-01
Context/background: The Pain Management Plan (PP) is a brief cognitive behavioural therapy (CBT) self-management programme for people living with persistent pain that can be individually facilitated or provided in a group setting. Evidence of PP efficacy has been reported previously by the pain centres involved in its development. Objectives: To provide a fully independent evaluation of the PP and compare these findings with those reported by Cole et al. Methods: The PP programme was delivered by the County Durham Pain Team (Co. Durham PT) as outlined in training sessions led by Cole et al. Pre- and post-quantitative/patient experience measures were repeated, with reliable and clinically significant change determined and compared to the original evaluation. Results: Of the 69 participants who completed the programme, 33% achieved reliable change and 20% clinically significant change using the Pain Self-Efficacy Questionnaire (PSEQ). Across the Brief Pain Inventory (BPI) interference domains, between 11% and 22% of participants achieved clinically significant change. There were high levels of positive patient feedback, with 25% of participants scoring 100% satisfaction. The mean participant satisfaction across the population was 88%. Conclusion: The results from this evaluation validate those reported by Cole et al. It demonstrates clinically significant improvement in pain and health functioning and high patient appreciation results. Both evaluations emphasise the potential of this programme as an early intervention delivered within a stratified care pain pathway. This approach could optimise the use of finite resources and improve wider access to pain management.
Brendle, Joerg
2016-01-01
We show that, consistently, there can be maximal subtrees of P (omega) and P (omega) / fin of arbitrary regular uncountable size below the size of the continuum. We also show that there are no maximal subtrees of P (omega) / fin with countable levels. Our results answer several questions of Campero, Cancino, Hrusak, and Miranda.
Testing Bell's Inequality with Cosmic Photons: Closing the Setting-Independence Loophole
Gallicchio, Jason; Friedman, Andrew S.; Kaiser, David I.
2014-03-01
We propose a practical scheme to use photons from causally disconnected cosmic sources to set the detectors in an experimental test of Bell's inequality. In current experiments, with settings determined by quantum random number generators, only a small amount of correlation between detector settings and local hidden variables, established less than a millisecond before each experiment, would suffice to mimic the predictions of quantum mechanics. By setting the detectors using pairs of quasars or patches of the cosmic microwave background, observed violations of Bell's inequality would require any such coordination to have existed for billions of years—an improvement of 20 orders of magnitude.
Brüstle, Thomas; Pérotin, Matthieu
2012-01-01
Maximal green sequences are particular sequences of quiver mutations which were introduced by Keller in the context of quantum dilogarithm identities and independently by Cecotti-Cordova-Vafa in the context of supersymmetric gauge theory. Our aim is to initiate a systematic study of these sequences from a combinatorial point of view. Interpreting maximal green sequences as paths in various natural posets arising in representation theory, we prove the finiteness of the number of maximal green sequences for cluster finite quivers, affine quivers and acyclic quivers with at most three vertices. We also give results concerning the possible numbers and lengths of these maximal green sequences. Finally we describe an algorithm for computing maximal green sequences for arbitrary valued quivers which we used to obtain numerous explicit examples that we present.
Jones, Rebecca
2010-01-01
Two information literacy skills pilot projects are being undertaken at Malvern St James School (MSJ) with Year 6 and Year 9 pupils during 2009-10. The projects encourage the development of independent learning skills, with pupils planning, managing and executing both the research and practical elements of their project. Each pupil sets their own…
Testing Bell's Inequality with Cosmic Photons: Closing the Settings-Independence Loophole
Gallicchio, Jason; Kaiser, David I
2013-01-01
We propose a practical scheme to use photons from causally disconnected cosmic sources to set the detectors in an experimental test of Bell's inequality. In current experiments, detector settings are determined by local quantum random number generators. In such experiments, only a small amount of correlation between detector settings and some local hidden variables, established less than a millisecond before each experimental run, would suffice to mimic the predictions of quantum mechanics. By setting the detectors using cosmic sources instead, observed violations of Bell's inequality in our proposed "Cosmic Bell" experiment would require any such coordination to have been in place for billions of years rather than milliseconds -- an improvement of 20 orders of magnitude. Quasar pairs can be used as real-time triggers to establish detector settings using existing technology. For quasars on opposite sides of the sky with redshifts z > 3.65, there is no event after the hot big bang 13.8 billion years ago (follo...
75 FR 13521 - Centers for Independent Living Program-Training and Technical Assistance
2010-03-22
... of Program: The purpose of the CIL program is to maximize independence, productivity, empowerment... skills they need to leave nursing homes and other institutional settings. Providing technical...
The limitations of simple gene set enrichment analysis assuming gene independence.
Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P
2016-02-01
Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored due to the significant variance inflation they produce on the enrichment scores and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods.
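The variance inflation at issue follows from the identity Var(x̄) = (1 + (m-1)ρ)/m for the mean of m equicorrelated standard-normal scores; a quick simulation check (the numbers, seed, and shared-factor construction are my own, not from the paper):

```python
import random
import statistics

def var_of_set_mean(m, rho, trials=20000, seed=0):
    """Empirical variance of the mean of m equicorrelated N(0,1) scores,
    built from a shared factor: x_i = sqrt(rho)*g + sqrt(1-rho)*e_i.
    Theory predicts Var(mean) = (1 + (m-1)*rho) / m."""
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        g = rng.gauss(0, 1)
        xs = [(rho ** 0.5) * g + ((1 - rho) ** 0.5) * rng.gauss(0, 1)
              for _ in range(m)]
        means.append(sum(xs) / m)
    return statistics.variance(means)
```

With m = 10 and ρ = 0.3 the variance is roughly 0.37 instead of the 0.10 an independence assumption predicts, i.e. nearly a fourfold inflation of the set-level statistic's variance.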
Pierre Lafaye de Micheaux
2011-10-01
Full Text Available For statistical analysis of functional magnetic resonance imaging (fMRI data sets, we propose a data-driven approach based on independent component analysis (ICA implemented in a new version of the AnalyzeFMRI R package. For fMRI data sets, spatial dimension being much greater than temporal dimension, spatial ICA is the computationally tractable approach generally proposed. However, for some neuroscientific applications, temporal independence of source signals can be assumed and temporal ICA becomes then an attractive exploratory technique. In this work, we use a classical linear algebra result ensuring the tractability of temporal ICA. We report several experiments on synthetic data and real MRI data sets that demonstrate the potential interest of our R package.
HEMI: Hyperedge Majority Influence Maximization
Gangal, Varun; Narayanam, Ramasuri
2016-01-01
In this work, we consider the problem of influence maximization on a hypergraph. We first extend the Independent Cascade (IC) model to hypergraphs, and prove that the traditional influence maximization problem remains submodular. We then present a variant of the influence maximization problem (HEMI) where one seeks to maximize the number of hyperedges, a majority of whose nodes are influenced. We prove that HEMI is non-submodular under the diffusion model proposed.
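As a concrete reading of the objective: the Independent Cascade process spreads influence along edges, and HEMI then counts hyperedges whose active nodes form a strict majority. A toy sketch under the standard IC model (graph representation, names, and probabilities are illustrative, not the paper's hypergraph extension):

```python
import random

def ic_spread(neighbors, seeds, p, rng=None):
    """One Independent Cascade run: each newly activated node gets a
    single chance to activate each inactive neighbor with probability p."""
    rng = rng or random.Random(0)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in neighbors.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def hemi_objective(hyperedges, active):
    """HEMI objective: number of hyperedges a strict majority of whose
    nodes are active."""
    active = set(active)
    return sum(1 for e in hyperedges
               if 2 * sum(v in active for v in e) > len(e))
```

With p = 1 the cascade reduces to plain reachability, which makes the behavior easy to verify by hand.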
1981-11-01
Whitney provided a set of axioms for a structure commonly called a matroid. Matroid theory (see [Tutte, 1971], [Lawler, 1976]) has applications to a wide... applicable in this case. On the other hand, there is no known efficient (polynomial time) algorithm for constructing cliques of size 2 log n with... intersection. The problem of constructing a maximal independent set in the intersection of k matroids has a polynomial time (in |E|) algorithm [Lawler, 197
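For a single matroid, by contrast, a maximal independent set is easy to build greedily given an independence oracle: scan the elements once and keep each one whose addition preserves independence. A sketch (oracle interface and the partition-matroid example in the test are my own):

```python
def greedy_maximal_independent_set(elements, is_independent):
    """Build a maximal independent set of a matroid from an independence
    oracle. For a matroid, a single greedy pass suffices: the result is
    maximal among independent subsets of `elements`."""
    chosen = []
    for e in elements:
        if is_independent(chosen + [e]):
            chosen.append(e)
    return chosen
```

A partition matroid (at most one element per class) gives a minimal concrete oracle to exercise this with.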
K B Athreya
2009-09-01
It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf $f$ that satisfy $\int f h_i \, d\mu = \lambda_i$ for $i=1,2,\ldots,k$, the maximizer of entropy is an $f_0$ that is proportional to $\exp(\sum c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of constraints and many examples are presented.
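Result (ii) can be checked numerically in a discrete setting with the single constraint h(x) = x (fixing the mean): the entropy maximizer has weights proportional to exp(c·x), and any other distribution with the same mean has strictly lower entropy. A small sketch of my own (bisection on c; names are assumptions):

```python
import math

def maxent_mean(support, target_mean, tol=1e-12):
    """On a finite support, the entropy maximizer subject to a fixed mean
    is p_i proportional to exp(c * x_i); the mean is increasing in c, so
    c can be found by bisection."""
    def mean_for(c):
        w = [math.exp(c * x) for x in support]
        return sum(x * wi for x, wi in zip(support, w)) / sum(w)
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    c = (lo + hi) / 2
    w = [math.exp(c * x) for x in support]
    s = sum(w)
    return [wi / s for wi in w]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)
```

For a symmetric target mean the exponent degenerates to c = 0 and the maximizer is the uniform distribution, as expected.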
Meta-Analysis of Pathway Enrichment: Combining Independent and Dependent Omics Data Sets
Kaever, Alexander; Landesfeind, Manuel; Feussner, Kirstin; Morgenstern, Burkhard; Feussner, Ivo; Meinicke, Peter
2014-01-01
A major challenge in current systems biology is the combination and integrative analysis of large data sets obtained from different high-throughput omics platforms, such as mass spectrometry based Metabolomics and Proteomics or DNA microarray or RNA-seq-based Transcriptomics. Especially in the case of non-targeted Metabolomics experiments, where it is often impossible to unambiguously map ion features from mass spectrometry analysis to metabolites, the integration of more reliable omics techn...
Pose-Independent Face Recognition Using Biologically Inspired Feature Set and Mixture of Experts
Reza Azad
2014-08-01
Automatic face recognition systems have received significant attention during the last decades due to their wide range of applications, such as security, human-computer interaction, visual surveillance, and so on. In this paper, a new and efficient face recognition method is proposed, based on features inspired by the human visual cortex and a mixture-of-experts architecture applied to the extracted feature set. A feature set is extracted by means of a feed-forward model, which contains view- and illumination-invariant C2 features, from all images in the data set. These C2 feature vectors, derived from a cortex-like mechanism, are then passed to a mixture of multilayer perceptron neural networks. In the results section, the proposed approach is applied to the ORL and Yale databases and the accuracy rates achieved are 99.75% and 100%, respectively. In addition, experimental results have demonstrated that our method is robust in successful recognition of human faces even with varying lighting and poses.
A set of ligation-independent in vitro translation vectors for eukaryotic protein production
Endo Yaeta
2008-03-01
Background: The last decade has brought the renaissance of protein studies and accelerated the development of high-throughput methods in all aspects of proteomics. Presently, most protein synthesis systems exploit the capacity of living cells to translate proteins, but their application is limited by several factors. A more flexible alternative protein production method is cell-free in vitro protein translation. Currently available in vitro translation systems are suitable for high-throughput robotic protein production, fulfilling the requirements of proteomics studies. The wheat germ extract based in vitro translation system is likely the most promising method, since numerous eukaryotic proteins can be cost-efficiently synthesized in their native folded form. Although currently available vectors for wheat embryo in vitro translation systems ensure high productivity, they do not meet the requirements of state-of-the-art proteomics. Target genes have to be inserted using restriction endonucleases and the plasmids do not encode cleavable affinity purification tags. Results: We designed four ligation independent cloning (LIC) vectors for wheat germ extract based in vitro protein translation. In these constructs, RNA transcription is driven by T7 or SP6 phage polymerase and two TEV protease cleavable affinity tags can be added to aid protein purification. To evaluate our improved vectors, a plant mitogen activated protein kinase was cloned into all four constructs. Purification of this eukaryotic protein kinase demonstrated that all constructs functioned as intended: insertion of the PCR fragment by LIC worked efficiently, affinity purification of translated proteins by GST-Sepharose or MagneHis particles resulted in high-purity kinase, and the affinity tags could efficiently be removed under different reaction conditions. Furthermore, high in vitro kinase activity testified to proper folding of the purified protein. Conclusions: Four newly
Li, Zipeng; Chen, Jinglong; Zi, Yanyang; Pan, Jun
2017-02-01
As one of the most critical components of a high-speed locomotive, the wheel set bearing has attracted increasing attention for fault identification in recent years. However, non-stationary vibration signals with modulation phenomena and heavy background noise make it difficult to extract the hidden weak fault features. Variational Mode Decomposition (VMD), which can decompose a non-stationary signal into several Intrinsic Mode Functions adaptively and non-recursively, provides a feasible tool. However, heavy background noise seriously affects the setting of the mode number, which may lead to information loss or an over-decomposition problem. In this paper, an independence-oriented VMD method via correlation analysis is proposed to adaptively extract weak and compound fault features of wheel set bearings. To overcome the information loss problem, the appropriate mode number is determined by the criterion of approximate complete reconstruction. Then similar modes are combined according to the similarity of their envelopes to solve the over-decomposition problem. Finally, three applications to wheel set bearing faults of high-speed locomotives verify the effectiveness of the proposed method compared with the original VMD, EMD and EEMD methods.
2016-01-01
Reduced cell wall invertase (CWIN) activity has been shown to be associated with poor seed and fruit set under abiotic stress. Here, we examined whether genetically increasing native CWIN activity would sustain fruit set under long-term moderate heat stress (LMHS), an important factor limiting crop production, by using transgenic tomato (Solanum lycopersicum) with its CWIN inhibitor gene silenced and focusing on ovaries and fruits at 2 d before and after pollination, respectively. We found that the increase of CWIN activity suppressed LMHS-induced programmed cell death in fruits. Surprisingly, measurement of the contents of H2O2 and malondialdehyde and the activities of a cohort of antioxidant enzymes revealed that the CWIN-mediated inhibition on programmed cell death is exerted in a reactive oxygen species-independent manner. Elevation of CWIN activity sustained Suc import into fruits and increased activities of hexokinase and fructokinase in the ovaries in response to LMHS. Compared to the wild type, the CWIN-elevated transgenic plants exhibited higher transcript levels of heat shock protein genes Hsp90 and Hsp100 in ovaries and HspII17.6 in fruits under LMHS, which corresponded to a lower transcript level of a negative auxin responsive factor IAA9 but a higher expression of the auxin biosynthesis gene ToFZY6 in fruits at 2 d after pollination. Collectively, the data indicate that CWIN enhances fruit set under LMHS through suppression of programmed cell death in a reactive oxygen species-independent manner that could involve enhanced Suc import and catabolism, HSP expression, and auxin response and biosynthesis.
Ansari, Elnaz Saberi; Eslahchi, Changiz; Pezeshk, Hamid; Sadeghi, Mehdi
2014-09-01
Decomposition of structural domains is an essential task in classifying protein structures, predicting protein function, and many other proteomics problems. As the number of known protein structures in PDB grows exponentially, the need for accurate automatic domain decomposition methods becomes more essential. In this article, we introduce a bottom-up algorithm for assigning protein domains using a graph theoretical approach. This algorithm is based on a center-based clustering approach. For constructing initial clusters, members of an independent dominating set for the graph representation of a protein are considered as the centers. A distance matrix is then defined for these clusters. To obtain final domains, these clusters are merged using the compactness principle of domains and a method similar to the neighbor-joining algorithm considering some thresholds. The thresholds are computed using a training set consisting of 50 protein chains. The algorithm is implemented using C++ language and is named ProDomAs. To assess the performance of ProDomAs, its results are compared with seven automatic methods, against five publicly available benchmarks. The results show that ProDomAs outperforms other methods applied on the mentioned benchmarks. The performance of ProDomAs is also evaluated against 6342 chains obtained from ASTRAL SCOP 1.71. ProDomAs is freely available at http://www.bioinf.cs.ipm.ir/software/prodomas. © 2014 Wiley Periodicals, Inc.
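The independent-dominating-set seeding idea can be illustrated with a small sketch: a greedily built maximal independent set is automatically an independent dominating set. The adjacency-dict representation and the deterministic vertex order below are assumptions for illustration, not ProDomAs's actual center-selection procedure.

```python
def independent_dominating_set(adj):
    """Greedy maximal independent set of a graph given as a dict
    mapping each vertex to the set of its neighbors. The result is
    independent (no two chosen vertices adjacent) and dominating
    (every vertex is chosen or adjacent to a chosen one)."""
    chosen, blocked = set(), set()
    for v in sorted(adj):            # deterministic scan order
        if v not in blocked:
            chosen.add(v)
            blocked.add(v)
            blocked.update(adj[v])   # neighbors may no longer be chosen
    return chosen
```

On the path 1-2-3-4 this selects {1, 3}: vertex 2 is blocked by 1, and 4 by 3, so every vertex is covered by a chosen center.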
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Wei, Jun; Cascade, Philip N.; Kazerooni, Ella A.
2009-02-01
Computed tomographic pulmonary angiography (CTPA) has been reported to be an effective means for clinical diagnosis of pulmonary embolism (PE). We are developing a computer-aided diagnosis (CAD) system for assisting radiologists in detection of pulmonary embolism in CTPA images. The pulmonary vessel tree is extracted based on the analysis of eigenvalues of Hessian matrices at multiple scales followed by 3D hierarchical EM segmentation. A multiprescreening method is designed to identify suspicious PEs along the extracted vessels. A linear discriminant analysis (LDA) classifier with feature selection is then used to reduce false positives (FPs). Two data sets of 59 and 69 CTPA PE cases were randomly selected from patient files at the University of Michigan (UM) and the PIOPED II study, respectively, and used as independent training and test sets. The PEs that were identified by three experienced thoracic radiologists were used as the gold standard. The detection performance of the CAD system was assessed by free response receiver operating characteristic analysis. The results indicated that our PE detection system can achieve a sensitivity of 80% at 18.9 FPs/case on the PIOPED cases when the LDA classifier was trained with the UM cases. The test sensitivity with the UM cases is 80% at 22.6 FPs/case when the LDA classifier was trained with the PIOPED cases.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
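As a concrete illustration of the classical approach the paper complements, SciPy's `kstest` compares i.i.d. draws against a specified distribution via the empirical CDF; the sample size and seed below are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
draws = rng.normal(size=1000)  # i.i.d. draws from N(0, 1)

# KS statistic (max CDF discrepancy) is small when the hypothesized
# distribution matches the draws
stat_ok, p_ok = stats.kstest(draws, stats.norm.cdf)

# Against a wrong distribution (uniform on [0, 1]) the discrepancy is
# large: roughly half the normal draws are negative, where the uniform
# CDF is exactly 0, so the statistic approaches 0.5
stat_bad, p_bad = stats.kstest(draws, stats.uniform.cdf)
```

The paper's point is that such CDF-based statistics can miss discrepancies confined to low-density regions, which is what the proposed density-based tests address.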
MAXIMS VIOLATIONS IN LITERARY WORK
Widya Hanum Sari Pertiwi
2015-12-01
This study was a qualitative action research project focusing on the flouting of Gricean maxims, and the functions of that flouting, in the tales included in the collection of children's literature entitled My Giant Treasury of Stories and Rhymes. The objective of the study is to identify violations of the maxims of quantity, quality, relevance, and manner in the data sources and to analyze the use of the flouting in the tales included in the book. A qualitative design using categorizing strategies, specifically a coding strategy, was applied. The researcher, as the instrument in this investigation, selected the tales, read them, and gathered every item reflecting a violation of the Gricean maxims based on the conditions for flouting maxims. On the basis of the data analysis, it was found that some utterances in the tales, both narration and conversation, flout the four maxims of conversation, namely the maxims of quality, quantity, relevance, and manner. The researcher also found that the flouting of maxims has one basic function: to encourage the readers' imagination toward the tales. This basic function is developed by six other functions: (1) generating a specific situation, (2) developing the plot, (3) enlivening the characters' utterances, (4) implicating a message, (5) indirectly characterizing characters, and (6) creating an ambiguous setting. Keywords: children's literature, tales, flouting maxims
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Wei, Jun; Cascade, Philip N.; Kazerooni, Ella A.
2009-01-01
The authors are developing a computer-aided detection system for pulmonary emboli (PE) in computed tomographic pulmonary angiography (CTPA) scans. The pulmonary vessel tree is extracted using a 3D expectation-maximization segmentation method based on the analysis of eigenvalues of Hessian matrices at multiple scales. A parallel multiprescreening method is applied to the segmented vessels to identify volume of interests (VOIs) that contained suspicious PE. A linear discriminant analysis (LDA) ...
Gonzalez-Sanchez, Jon
2010-01-01
Let $w = w(x_1,..., x_n)$ be a word, i.e. an element of the free group $F$ on $n$ generators $x_1,..., x_n$. The verbal subgroup $w(G)$ of a group $G$ is the subgroup generated by the set $\{w(g_1,...,g_n)^{\pm 1} \mid g_i \in G, 1\leq i\leq n\}$ of all $w$-values in $G$. We say that a (finite) group $G$ is $w$-maximal if $|G:w(G)| > |H:w(H)|$ for all proper subgroups $H$ of $G$, and that $G$ is hereditarily $w$-maximal if every subgroup of $G$ is $w$-maximal. In this text we study $w$-maximal and hereditarily $w$-maximal (finite) groups.
Constraining Torsion in Maximally symmetric (sub)spaces
Sur, Sourav
2013-01-01
We look into the general aspects of space-time symmetries in the presence of torsion, and how the latter is affected by such symmetries. Focusing in particular on space-times which either exhibit maximal symmetry on their own or can be decomposed into maximally symmetric subspaces, we work out the constraints on torsion in two different theoretical schemes. We show that, at least for a completely antisymmetric torsion tensor (e.g., the one motivated from string theory), an equivalence is set between these two schemes, as the non-vanishing independent torsion tensor components turn out to be the same.
The r-color independent set partition problem of some graphs
王蒙; 田双亮
2011-01-01
We study the r-color independent set partition problem for trees and unicyclic graphs, and derive their r-color independent set partition numbers. The method used has reference value for studying the r-color independent set partition problem of other classes of graphs.
Denny Meyer
2006-12-01
The objective of this paper is to use data from the highest level in men's tennis to assess whether there is any evidence to reject the hypothesis that the two players in a match have a constant probability of winning each set in the match. The data consist of all 4883 matches of grand slam men's singles over the 10 year period from 1995 to 2004. Each match is categorised by its sequence of wins (W) and losses (L) (in set 1, set 2, set 3, ...) relative to the eventual winner. Thus, there are ten categories of matches, from WWW to LLWWW. The methodology involves fitting several probabilistic models to the frequencies of these ten categories. One four-set category is observed to occur significantly more often than the other two. Correspondingly, a couple of the five-set categories occur more frequently than the others. This pattern is consistent when the data is split into two five-year subsets. The data provide significant statistical evidence that the probability of winning a set within a match varies from set to set. The data support the conclusion that, at the highest level of men's singles tennis, the better player (not necessarily the winner) lifts his play in certain situations at least some of the time.
Velastegui, Pamela J.
2013-01-01
This hypothesis-generating case study investigates the naturally emerging roles of technology brokers and technology leaders in three independent schools in New York involving 92 school educators. A multiple and mixed method design utilizing Social Network Analysis (SNA) and fuzzy set Qualitative Comparative Analysis (FSQCA) involved gathering…
Taeroe, Anders; Mustapha, Walid Fayez; Stupak, Inge; Raulund-Rasmussen, Karsten
2017-07-15
Forests' potential to mitigate carbon emissions to the atmosphere is heavily debated and a key question is if forests left unmanaged to store carbon in biomass and soil provide larger carbon emission reductions than forests kept under forest management for production of wood that can substitute fossil fuels and fossil fuel intensive materials. We defined a modelling framework for calculation of the carbon pools and fluxes along the forest energy and wood product supply chains over 200 years for three forest management alternatives (FMA): 1) a traditionally managed European beech forest, as a business-as-usual case, 2) an energy poplar plantation, and 3) a set-aside forest left unmanaged for long-term storage of carbon. We calculated the cumulative net carbon emissions (CCE) and carbon parity times (CPT) of the managed forests relative to the unmanaged forest. Energy poplar generally had the lowest CCE when using coal as the reference fossil fuel. With natural gas as the reference fossil fuel, the CCE of the business-as-usual and the energy poplar was nearly equal, with the unmanaged forest having the highest CCE after 40 years. CPTs ranged from 0 to 156 years, depending on the applied model assumptions. CCE and CPT were especially sensitive to the reference fossil fuel, material alternatives to wood, forest growth rates for the three FMAs, and energy conversion efficiencies. Assumptions about the long-term steady-state levels of carbon stored in the unmanaged forest had a limited effect on CCE after 200 years. Analyses also showed that CPT was not a robust measure for ranking of carbon mitigation benefits. Copyright © 2017 Elsevier Ltd. All rights reserved.
Profit maximization mitigates competition
Dierker, Egbert; Grodal, Birgit
1996-01-01
We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in the case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account, and partial equilibrium analysis suffices.
Maximally incompatible quantum observables
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)
2014-05-01
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean W.; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Wei, Jun; Patel, Smita
2015-03-01
We have developed a computer-aided detection (CAD) system for assisting radiologists in detection of pulmonary embolism (PE) in computed tomographic pulmonary angiographic (CTPA) images. The CAD system includes stages of pulmonary vessel segmentation, prescreening of PE candidates and false positive (FP) reduction to identify suspicious PEs. The system was trained with 59 CTPA PE cases collected retrospectively from our patient files (UM set) with IRB approval. Five feature groups containing 139 features that characterized the intensity texture, gradient, intensity homogeneity, shape, and topology of PE candidates were initially extracted. Stepwise feature selection guided by simplex optimization was used to select effective features for FP reduction. A linear discriminant analysis (LDA) classifier was formulated to differentiate true PEs from FPs. The purpose of this study is to evaluate the performance of our CAD system using an independent test set of CTPA cases. The test set consists of 50 PE cases from the PIOPED II data set collected by multiple institutions with access permission. A total of 537 PEs were manually marked by experienced thoracic radiologists as reference standard for the test set. The detection performance was evaluated by free-response receiver operating characteristic (FROC) analysis. The FP classifier obtained a test Az value of 0.847 and the FROC analysis indicated that the CAD system achieved an overall sensitivity of 80% at 8.6 FPs/case for the PIOPED test set.
Rahman, Md Masudur; Mahdy, Mahdy Rahman Chowdhury; Haque, Md Ehsanul; Islam, Rakibul; Chowdhury, S Tanvir-ur-Rahman; Nieto-Vesperinas, Manuel; Matin, Md Abdul
2015-01-01
Optical pulling with tractor beams is so far highly dependent on (i) the properties of the embedding background or the particle itself, (ii) the number of particles, and/or (iii) the manual ramping of the beam phase. A necessary theoretical solution to these problems is proposed here. This article demonstrates a novel active tractor beam for multiple fully immersed objects, with the additional abilities of yielding a controlled rotation and a desired 3D trapping. Continuous and stable long-distance levitation, controlled rotation, and 3D trapping are demonstrated with a single optical set-up by using two coaxial, or even non-coaxial, superimposed non-diffracting higher-order Bessel beams of reverse helical nature and different frequencies. The superimposed beam has periodic intensity variations both along and around the beam axis because of the difference in longitudinal wave-vectors and beam orders, respectively. The difference in frequencies of the two laser beams makes the intensity pattern move along and around the b...
Parker, Andrew M.; Wandi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...
Tight inequalities for qutrit state-independent contextuality
Cabello, Adan; Gühne, Otfried; Kleinmann, Matthias; Larsson, Jan-Ake
2012-01-01
Recently, Yu and Oh have proposed a noncontextuality inequality [Phys. Rev. Lett. 108, 030402 (2012)] which involves the simplest and hence most fundamental scenario for state-independent quantum contextuality. As we show, Yu and Oh's inequality is neither tight (i.e., it does not belong to the minimal set which completely separates contextual and noncontextual correlations) nor optimal (i.e., its quantum violation is not maximal). Moreover, we provide a method for obtaining state-independent noncontextuality inequalities with the maximal violation and, using it, we identify two essentially different state-independent tight inequalities with maximal quantum violation for Yu and Oh's scenario. These inequalities allow for an easier and more significant experimental test of qutrit state-independent quantum contextuality.
Ming Yi WANG; Guo ZHAO
2005-01-01
A right R-module E over a ring R is said to be maximally injective in case for any maximal right ideal m of R, every R-homomorphism f : m → E can be extended to an R-homomorphism f' : R → E. In this paper, we first construct an example to show that maximal injectivity is a proper generalization of injectivity. Then we prove that any right R-module over a left perfect ring R is maximally injective if and only if it is injective. We also give a partial affirmative answer to Faith's conjecture by further investigating the property of maximally injective rings. Finally, we get an approximation to Faith's conjecture, which asserts that every injective right R-module over any left perfect right self-injective ring R is the injective hull of a projective submodule.
Andrew M. Parker
2007-12-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and a greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.
Knop, R A; Amanullah, R; Astier, Pierre; Blanc, G; Burns, M S; Conley, A; Deustua, S E; Doi, M; Ellis, R; Fabbro, S; Folatelli, G; Fruchter, A S; Garavini, G; Garmond, S; Garton, K; Gibbons, R; Goldhaber, G; Goobar, A; Groom, D E; Hardin, D; Hook, I; Howell, D A; Kim, A G; Lee Byung Cheol; Lidman, C E; Méndez, J; Nobili, S; Nugent, P; Pain, R; Panagia, N; Pennypacker, C R; Perlmutter, S; Quimby, R; Raux, J; Regnault, N; Ruiz-Lapuente, P; Sainton, G; Schaefer, B; Schahmaneche, K; Smith, E; Spadafora, A L; Stanishev, V; Sullivan, M; Walton, N A; Wang, L; Wood-Vasey, W M; Yasuda, N
2003-01-01
We report measurements of $\\Omega_M$, $\\Omega_\\Lambda$, and w from eleven supernovae at z=0.36-0.86 with high-quality lightcurves measured using WFPC-2 on the HST. This is an independent set of high-redshift supernovae that confirms previous supernova evidence for an accelerating Universe. Combined with earlier Supernova Cosmology Project data, the new supernovae yield a flat-universe measurement of the mass density $\\Omega_M=0.25^{+0.07}_{-0.06}$ (statistical) $\\pm0.04$ (identified systematics), or equivalently, a cosmological constant of $\\Omega_\\Lambda=0.75^{+0.06}_{-0.07}$ (statistical) $\\pm0.04$ (identified systematics). When the supernova results are combined with independent flat-universe measurements of $\\Omega_M$ from CMB and galaxy redshift distortion data, they provide a measurement of $w=-1.05^{+0.15}_{-0.20}$ (statistical) $\\pm0.09$ (identified systematic), if w is assumed to be constant in time. The new data offer greatly improved color measurements of the high-redshift supernovae, and hence imp...
Rudiger Bubner
1998-12-01
Even though the theory of maxims is not at the center of Kant's ethics, it is the unavoidable basis of the formulation of the categorical imperative. Kant leans on the transmitted representations of modern moral theory. During the last decades, the notion of maxims has deserved more attention, due to the philosophy of language's debates on rules and due to action theory's interest in this notion. I hereby briefly expound my views in these discussions.
Simulating (log^c n)-wise Independence in NC
1989-05-01
A hypergraph is d-uniform if every edge has d elements. Kleitman and Alon, Babai, and Itai [KI, ABI] define the large d-partite subhypergraph problem as follows... References: [ABI] Alon, N., L. Babai, A. Itai, "A Fast and Simple Randomized Parallel Algorithm for the Maximal Independent Set Problem", Journal...
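The randomized parallel MIS algorithm of [ABI] can be simulated round by round: each live vertex draws a random value, local minima join the independent set, and they are removed together with their neighbors. This sequential sketch and its dict-of-sets graph encoding are illustrative assumptions, not the paper's exact construction.

```python
import random

def randomized_mis(adj, seed=0):
    """Round-based randomized maximal independent set.
    `adj` maps each vertex to the set of its neighbors."""
    rng = random.Random(seed)
    live = set(adj)
    mis = set()
    while live:
        r = {v: rng.random() for v in live}
        # a vertex wins a round if it beats every live neighbor
        winners = {v for v in live
                   if all(r[v] < r[u] for u in adj[v] if u in live)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & live  # neighbors of winners drop out too
        live -= removed
    return mis
```

Winners in one round are never adjacent (one of two neighbors has the smaller value), and every removed vertex is a winner or a winner's neighbor, so the result is independent and maximal.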
Polarity related influence maximization in signed social networks.
Dong Li
Influence maximization in social networks has been widely studied, motivated by applications like the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g., friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g., foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
Polarity related influence maximization in signed social networks.
Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng
2014-01-01
Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
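The greedy algorithm behind the 1-1/e guarantee can be sketched over the plain (unsigned) IC model with Monte Carlo spread estimation. The propagation probability `p`, the number of runs, and the graph encoding below are illustrative assumptions, not the paper's IC-P model.

```python
import random

def simulate_ic(adj, seeds, p, rng):
    """One Monte Carlo run of the Independent Cascade model: each newly
    activated node gets a single chance to activate each inactive
    neighbor with probability p. Returns the final spread size."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(adj, k, p=0.1, runs=200, seed=0):
    """Greedily add the node with the largest estimated marginal spread.
    Monotonicity and submodularity of the spread give the (1 - 1/e)
    approximation guarantee for this greedy scheme."""
    rng = random.Random(seed)
    seeds = set()
    for _ in range(k):
        def gain(v):
            return sum(simulate_ic(adj, seeds | {v}, p, rng)
                       for _ in range(runs)) / runs
        best = max((v for v in adj if v not in seeds), key=gain)
        seeds.add(best)
    return seeds
```

On a star graph with a high propagation probability, the hub's expected spread dominates any leaf's, so the greedy step selects the hub.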
Elena Caro
In eukaryotic cells, environmental and developmental signals alter chromatin structure and modulate gene expression. Heterochromatin constitutes the transcriptionally inactive state of the genome and, in plants and mammals, is generally characterized by DNA methylation and histone modifications such as histone H3 lysine 9 (H3K9) methylation. In Arabidopsis thaliana, DNA methylation and H3K9 methylation are usually colocated and set up a mutually self-reinforcing and stable state. Here, in contrast, we found that SUVR5, a plant Su(var)3-9 homolog with a SET histone methyltransferase domain, mediates H3K9me2 deposition and regulates gene expression in a DNA methylation-independent manner. SUVR5 binds DNA through its zinc fingers and represses the expression of a subset of stimulus response genes. This represents a novel mechanism for plants to regulate their chromatin and transcriptional state, which may allow for the adaptability and modulation necessary to rapidly respond to extracellular cues.
Maximal Hypersurfaces in Spacetimes with Translational Symmetry
Bulawa, Andrew
2016-01-01
We consider four-dimensional vacuum spacetimes which admit a free isometric spacelike R-action. Taking a quotient with respect to the R-action produces a three-dimensional quotient spacetime. We establish several results regarding maximal hypersurfaces (spacelike hypersurfaces of zero mean curvature) in quotient spacetimes. First, we show that complete noncompact maximal hypersurfaces must either be flat cylinders S^1 x R or conformal to the Euclidean plane. Second, we establish a positive mass theorem for certain maximal hypersurfaces. Finally, while it is meaningful to use a bounded lapse when adopting the maximal hypersurface gauge condition in the four-dimensional (asymptotically flat) setting, it is shown here that nontrivial quotient spacetimes admit the maximal hypersurface gauge only with an unbounded lapse.
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
Froeschke, John T.; Stunz, Gregory W.; Sterba-Boatwright, Blair; Wildhaber, Mark L.
2010-01-01
Using a long-term fisheries-independent data set, we tested the 'shark nursery area concept' proposed by Heupel et al. (2007) with the suggested working assumptions that a shark nursery habitat would: (1) have an abundance of immature sharks greater than the mean abundance across all habitats where they occur; (2) be used by sharks repeatedly through time (years); and (3) see immature sharks remaining within the habitat for extended periods of time. We tested this concept using young-of-the-year (age 0) and juvenile (age 1+ yr) bull sharks Carcharhinus leucas from gill-net surveys conducted in Texas bays from 1976 to 2006 to estimate the potential nursery function of 9 coastal bays. Of the 9 bay systems considered as potential nursery habitat, only Matagorda Bay satisfied all 3 criteria for young-of-the-year bull sharks. Both Matagorda and San Antonio Bays met the criteria for juvenile bull sharks. Through these analyses we examined the utility of this approach for characterizing nursery areas and we also describe some practical considerations, such as the influence of the temporal or spatial scales considered when applying the nursery role concept to shark populations.
Maximal right smooth extension chains
Huang, Yun Bao
2010-01-01
If $w=u\alpha$ for $\alpha\in \Sigma=\{1,2\}$ and $u\in \Sigma^*$, then $w$ is said to be a \textit{simple right extension} of $u$, denoted by $u\prec w$. Let $k$ be a positive integer and let $P^k(\epsilon)$ denote the set of all $C^\infty$-words of height $k$. Take $u_{1}, u_{2},..., u_{m}\in P^{k}(\epsilon)$; if $u_{1}\prec u_{2}\prec ...\prec u_{m}$ and there is no element $v$ of $P^{k}(\epsilon)$ such that $v\prec u_{1}$ or $u_{m}\prec v$, then $u_{1}\prec u_{2}\prec...\prec u_{m}$ is said to be a \textit{maximal right smooth extension (MRSE) chain} of height $k$. In this paper, we show that \textit{MRSE} chains of height $k$ constitute a partition of smooth words of height $k$ and give a formula for the number of \textit{MRSE} chains of height $k$ for each positive integer $k$. Moreover, since there exist a minimal height $h_1$ and a maximal height $h_2$ of smooth words of length $n$ for each positive integer $n$, we find that \textit{MRSE} chains of heights $h_1-1$ and $h_2+1$ are good candidates t...
Vind, Karl
1991-01-01
A simple mathematical result characterizing a subset of a product set is proved and used to obtain additive representations of preferences. The additivity consequences of independence assumptions are obtained for preferences which are not total or transitive. This means that most of the economic theory based on additive preferences - expected utility, discounted utility - has been generalized to preferences which are not total or transitive. Other economic applications of the theorem are given...
Janusz Brzozowski
2014-05-01
The atoms of a regular language are non-empty intersections of complemented and uncomplemented quotients of the language. Tight upper bounds on the number of atoms of a language and on the quotient complexities of atoms are known. We introduce a new class of regular languages, called the maximally atomic languages, consisting of all languages meeting these bounds. We prove the following result: If L is a regular language of quotient complexity n and G is the subgroup of permutations in the transition semigroup T of the minimal DFA of L, then L is maximally atomic if and only if G is transitive on k-subsets of 1,...,n for 0 <= k <= n and T contains a transformation of rank n-1.
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline with 30% (v/v) ethanol or saline, respectively. Relative viscosity was used as one measure of physical properties of the emulsion. Higher degrees of sensitization (but not rates) were obtained at the 48 h challenge reading with the oil/propylene glycol and oil/saline + ethanol emulsions compared to the saline/oil emulsion. Placing of the challenge patches affected the response, as simultaneous chlorocresol challenge on the flank located 2 cm closer to the abdomen than the usual challenge site gave decreased reactions....
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).
Maximizing Complementary Quantities by Projective Measurements
M. Souza, Leonardo A.; Bernardes, Nadja K.; Rossi, Romeu
2017-04-01
In this work, we study the so-called quantitative complementarity quantities. We focus on the following physical situation: two qubits (q_A and q_B) are initially in a maximally entangled state. One of them (q_B) interacts with an N-qubit system (R). After the interaction, projective measurements are performed on each of the qubits of R, in a basis that is chosen after independent optimization procedures: maximization of the visibility, the concurrence, and the predictability. For a specific maximization procedure, we study in detail how each of the complementary quantities behaves, conditioned on the intensity of the coupling between q_B and the N qubits. We show that, if the coupling is sufficiently "strong," independent of the maximization procedure, the concurrence tends to decay quickly. Interestingly enough, the behavior of the concurrence in this model is similar to the entanglement dynamics of a two-qubit system subjected to a thermal reservoir, even though we consider finite N. However, the visibility shows a different behavior: its maximization is more efficient for stronger coupling constants. Moreover, we investigate how the distinguishability, or the information stored in different parts of the system, is distributed for different couplings.
HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL
HR Division
2000-01-01
Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000.
Reimbursement maximal
The revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN.
Fixed contributions
The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions):
voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999)
voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999)
voluntarily insured no longer dependent child: 326,- (was 321...
Social group utility maximization
Gong, Xiaowen; Yang, Lei; Zhang, Junshan
2014-01-01
This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy. This brief also investigates the SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b
Brandes, U; Gaertler, M; Goerke, R; Hoefer, M; Nikoloski, Z; Wagner, D
2006-01-01
Several algorithms have been proposed to compute partitions of networks into communities that score high on a graph clustering index called modularity. While publications on these algorithms typically contain experimental evaluations to emphasize the plausibility of results, none of these algorithms has been shown to actually compute optimal partitions. We here settle the unknown complexity status of modularity maximization by showing that the corresponding decision version is NP-complete in the strong sense. As a consequence, any efficient, i.e. polynomial-time, algorithm is only heuristic and yields suboptimal partitions on many instances.
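For readers unfamiliar with the clustering index discussed above, the Newman-Girvan modularity of a given partition can be computed directly from its definition; the sketch below (function name and example graph are illustrative, not taken from the paper) scores a partition as the observed fraction of intra-community edges minus the fraction expected under a degree-preserving random model. The NP-completeness result concerns finding the partition that maximizes this score, not evaluating it.

```python
from collections import defaultdict

def modularity(edges, communities):
    """Newman-Girvan modularity Q of a partition of an undirected graph:
    (fraction of edges inside communities) minus (expected fraction
    under a degree-preserving random model)."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    comm = {v: i for i, c in enumerate(communities) for v in c}
    # observed fraction of intra-community edges
    q = sum(1.0 for u, v in edges if comm[u] == comm[v]) / m
    # minus the expected fraction, community by community
    for c in communities:
        d = sum(deg[v] for v in c)
        q -= (d / (2.0 * m)) ** 2
    return q

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])  # about 0.357
```

Evaluating Q for one partition is linear in the graph size; the hardness lies entirely in the search over partitions.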
Knop, R. A.; Aldering, G.; Amanullah, R.; Astier, P.; Blanc, G.; Burns, M. S.; Conley, A.; Deustua, S. E.; Doi, M.; Ellis, R.; Fabbro, S.; Folatelli, G.; Fruchter, A. S.; Garavini, G.; Garmond, S.; Garton, K.; Gibbons, R.; Goldhaber, G.; Goobar, A.; Groom, D. E.; Hardin, D.; Hook, I.; Howell, D. A.; Kim, A. G.; Lee, B. C.; Lidman, C.; Mendez, J.; Nobili, S.; Nugent, P. E.; Pain, R.; Panagia, N.; Pennypacker, C. R.; Perlmutter, S.; Quimby, R.; Raux, J.; Regnault, N.; Ruiz-Lapuente, P.; Sainton, G.; Schaefer, B.; Schahmaneche, K.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Sullivan, M.; Walton, N. A.; Wang, L.; Wood-Vasey, W. M.; Yasuda, N.
2003-11-01
We report measurements of ΩM, ΩΛ, and w from 11 supernovae (SNe) at z=0.36-0.86 with high-quality light curves measured using WFPC2 on the Hubble Space Telescope (HST). This is an independent set of high-redshift SNe that confirms previous SN evidence for an accelerating universe. The high-quality light curves available from photometry on WFPC2 make it possible for these 11 SNe alone to provide measurements of the cosmological parameters comparable in statistical weight to the previous results. Combined with earlier Supernova Cosmology Project data, the new SNe yield a measurement of the mass density ΩM=0.25+0.07-0.06(statistical)+/-0.04 (identified systematics), or equivalently, a cosmological constant of ΩΛ=0.75+0.06-0.07(statistical)+/-0.04 (identified systematics), under the assumptions of a flat universe and that the dark energy equation-of-state parameter has a constant value w=-1. When the SN results are combined with independent flat-universe measurements of ΩM from cosmic microwave background and galaxy redshift distortion data, they provide a measurement of w=-1.05+0.15-0.20(statistical)+/-0.09 (identified systematic), if w is assumed to be constant in time. In addition to high-precision light-curve measurements, the new data offer greatly improved color measurements of the high-redshift SNe and hence improved host galaxy extinction estimates. These extinction measurements show no anomalous negative E(B-V) at high redshift. The precision of the measurements is such that it is possible to perform a host galaxy extinction correction directly for individual SNe without any assumptions or priors on the parent E(B-V) distribution. Our cosmological fits using full extinction corrections confirm that dark energy is required with P(ΩΛ>0)>0.99, a result consistent with previous and current SN analyses that rely on the identification of a low-extinction subset or prior assumptions concerning the intrinsic extinction distribution. Based in part on
Groups Satisfying the Maximal Condition on Non-modular Subgroups
Maria De Falco; Carmela Musella
2005-01-01
In this paper, (generalized) soluble groups for which the set of non-modular subgroups satisfies the maximal condition, and groups for which the set of non-permutable subgroups satisfies the same property, are classified.
Energy Band Calculations for Maximally Even Superlattices
Krantz, Richard; Byrd, Jason
2007-03-01
Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
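The maximally even distribution mentioned above has a simple arithmetic construction (the Clough-Douthett J-function). The sketch below illustrates that standard construction; it is not code from the paper, and the superlattice interpretation of the output is an assumption for illustration.

```python
def maximally_even(n, k):
    """One standard construction of a maximally even k-element subset of
    n cyclically ordered positions (Clough-Douthett J-function, offset 0).
    The selected positions could mark, e.g., wells in an n-site
    superlattice (illustrative interpretation)."""
    return [(i * n) // k for i in range(k)]

# 5 'black keys' spread among 12 positions: the familiar keyboard pattern
pattern = maximally_even(12, 5)  # [0, 2, 4, 7, 9]
```

Complementarily, `maximally_even(12, 7)` yields the diatonic (white-key) pattern, matching the keyboard analogy in the abstract.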
Maximizing without difficulty: A modified maximizing scale and its correlates
Linda Lai
2010-01-01
This article presents several studies that replicate and extend previous research on maximizing. A modified scale for measuring individual maximizing tendency is introduced. The scale has adequate psychometric properties and reflects maximizers' aspirations for high standards and their preference for extensive alternative search, but not the decision difficulty aspect included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cogniti...
Maximizing profit using recommender systems
Das, Aparna; Ricketts, Daniel
2009-01-01
Traditional recommendation systems make recommendations based solely on the customer's past purchases, product ratings and demographic data, without considering the profitability of the items being recommended. In this work we study the question of how a vendor can directly incorporate the profitability of items into its recommender so as to maximize its expected profit while still providing accurate recommendations. Our approach takes the output of any traditional recommender system and adjusts it according to item profitabilities. Our approach is parameterized so that the vendor can control how much the profit-aware recommendation can deviate from the traditional recommendation. We study our approach under two settings and show that it achieves approximately 22% more profit than traditional recommendations.
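As a hedged illustration of the general idea (the blending rule and the knob name `alpha` are assumptions for this sketch, not the paper's parameterization), a profit-aware re-ranking can interpolate between base-recommender relevance and per-item margins:

```python
def profit_adjusted_scores(relevance, profit, alpha=0.3):
    """Blend base-recommender relevance with normalized item profit.
    alpha (hypothetical knob) controls how far the ranking may drift
    from the purely relevance-based one: alpha=0 reproduces the base
    recommender, alpha=1 ranks by profit alone."""
    return {item: (1 - alpha) * r + alpha * profit.get(item, 0.0)
            for item, r in relevance.items()}

relevance = {"A": 0.9, "B": 0.8, "C": 0.4}  # output of any base recommender
profit = {"A": 0.1, "B": 0.7, "C": 0.9}     # normalized per-item margins
scores = profit_adjusted_scores(relevance, profit)
ranking = sorted(scores, key=scores.get, reverse=True)  # ['B', 'A', 'C']
```

With `alpha=0.3` the profitable item B overtakes the most relevant item A, while the low-relevance item C stays last, showing the controlled deviation the abstract describes.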
The maximal D=5 supergravities
de Wit, Bernard; Trigiante, M; Wit, Bernard de; Samtleben, Henning; Trigiante, Mario
2007-01-01
The general Lagrangian for maximal supergravity in five spacetime dimensions is presented with vector potentials in the \\bar{27} and tensor fields in the 27 representation of E_6. This novel tensor-vector system is subject to an intricate set of gauge transformations, describing 3(27-t) massless helicity degrees of freedom for the vector fields and 3t massive spin degrees of freedom for the tensor fields, where the (even) value of t depends on the gauging. The kinetic term of the tensor fields is accompanied by a unique Chern-Simons coupling which involves both vector and tensor fields. The Lagrangians are completely encoded in terms of the embedding tensor which defines the E_6 subgroup that is gauged by the vectors. The embedding tensor is subject to two constraints which ensure the consistency of the combined vector-tensor gauge transformations and the supersymmetry of the full Lagrangian. This new formulation encompasses all possible gaugings.
Quantum theory allows for absolute maximal contextuality
Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán
2015-12-01
Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
On the independence polynomial of an antiregular graph
Levit, Vadim E
2010-01-01
A graph with at most two vertices of the same degree is called antiregular (Merris 2003), maximally nonregular (Zykov 1990) or quasiperfect (Behzad, Chartrand 1967). If s_{k} is the number of independent sets of cardinality k in a graph G, then I(G;x) = s_{0} + s_{1}x + ... + s_{alpha}x^{alpha} is the independence polynomial of G (Gutman, Harary 1983), where alpha = alpha(G) is the size of a maximum independent set. In this paper we derive closed formulae for the independence polynomials of antiregular graphs. In particular, we deduce that every antiregular graph A is uniquely defined by its independence polynomial I(A;x), within the family of threshold graphs. Moreover, I(A;x) is log-concave with at most two real roots, and I(A;-1) belongs to {-1,0}.
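The coefficients s_k defined above can be computed by exhaustive search for small graphs. The sketch below is illustrative only (the paper derives closed formulae for antiregular graphs instead of enumerating):

```python
from itertools import combinations

def independence_polynomial(n, edges):
    """Coefficients [s_0, ..., s_alpha] of I(G;x) for a graph on
    vertices 0..n-1, by brute-force enumeration of independent sets
    (fine for small graphs only)."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    coeffs = []
    for k in range(n + 1):
        s_k = sum(1 for S in combinations(range(n), k)
                  if all(v not in adj[u] for u, v in combinations(S, 2)))
        if s_k == 0:
            break  # alpha(G) reached: no larger independent sets exist
        coeffs.append(s_k)
    return coeffs

# the path on 4 vertices: I(P4;x) = 1 + 4x + 3x^2
coeffs = independence_polynomial(4, [(0, 1), (1, 2), (2, 3)])
```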
The Winning Edge: Maximizing Success in College.
Schmitt, David E.
This book offers college students ideas on how to maximize their success in college by examining the personal management techniques a student needs to succeed. Chapters are as follows: "Getting and Staying Motivated"; "Setting Goals and Tapping Your Resources"; "Conquering Time"; "Think Yourself to College Success"; "Understanding and Remembering…
The Critical Independence Number and an Independence Decomposition
Larson, Craig Eric
2009-01-01
An independent set $I_c$ is a \\textit{critical independent set} if $|I_c| - |N(I_c)| \\geq |J| - |N(J)|$, for any independent set $J$. The \\textit{critical independence number} of a graph is the cardinality of a maximum critical independent set. This number is a lower bound for the independence number and can be computed in polynomial-time. Any graph can be decomposed into two subgraphs where the independence number of one subgraph equals its critical independence number, where the critical independence number of the other subgraph is zero, and where the sum of the independence numbers of the subgraphs is the independence number of the graph. A proof of a conjecture of Graffiti.pc yields a new characterization of K\\"{o}nig-Egervary graphs: these are exactly the graphs whose independence and critical independence numbers are equal.
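The quantity |I| - |N(I)| that defines a critical independent set can be explored by brute force on small graphs. Note the contrast with the paper's point: a maximum critical independent set is computable in polynomial time, whereas this illustrative sketch is exponential.

```python
from itertools import combinations

def critical_difference(n, edges):
    """max over independent sets I of |I| - |N(I)|, by brute force
    (illustrative only; the critical independence number itself is
    polynomial-time computable by other means)."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = 0  # the empty independent set gives |I| - |N(I)| = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if any(v in adj[u] for u, v in combinations(S, 2)):
                continue  # S is not independent
            neighbours = set().union(*(adj[v] for v in S))
            best = max(best, len(S) - len(neighbours))
    return best

# star K_{1,3}: the three leaves give |I| - |N(I)| = 3 - 1 = 2
d = critical_difference(4, [(0, 1), (0, 2), (0, 3)])
```

On a triangle the best achievable difference is 0 (the empty set), illustrating that critical independent sets can be trivial even when the independence number is positive.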
Maximal and Minimal Congruences on Some Semigroups
Jintana SANWONG; Boorapa SINGHA; R.P.SULLIVAN
2009-01-01
In 2006, Sanwong and Sullivan described the maximal congruences on the semigroup N consisting of all non-negative integers under standard multiplication, and on the semigroup T(X) consisting of all total transformations of an infinite set X under composition. Here, we determine all maximal congruences on the semigroup Zn under multiplication modulo n. And, when Y ⊆ X, we do the same for the semigroup T(X,Y) consisting of all elements of T(X) whose range is contained in Y. We also characterise the minimal congruences on T(X,Y).
Unified Maximally Natural Supersymmetry
Huang, Junwu
2016-01-01
Maximally Natural Supersymmetry, an unusual weak-scale supersymmetric extension of the Standard Model based upon the inherently higher-dimensional mechanism of Scherk-Schwarz supersymmetry breaking (SSSB), possesses remarkably good fine tuning given present LHC limits. Here we construct a version with precision $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ unification: $\\sin^2 \\theta_W(M_Z) \\simeq 0.231$ is predicted to $\\pm 2\\%$ by unifying $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ into a 5D $SU(3)_{\\rm EW}$ theory at a Kaluza-Klein scale of $1/R_5 \\sim 4.4\\,{\\rm TeV}$, where SSSB is simultaneously realised. Full unification with $SU(3)_{\\rm C}$ is accommodated by extending the 5D theory to a $N=4$ supersymmetric $SU(6)$ gauge theory on a 6D rectangular orbifold at $1/R_6 \\sim 40 \\,{\\rm TeV}$. TeV-scale states beyond the SM include exotic charged fermions implied by $SU(3)_{\\rm EW}$ with masses lighter than $\\sim 1.2\\,{\\rm TeV}$, and squarks in the mass range $1.4\\,{\\rm TeV} - 2.3\\,{\\rm TeV}$, providing distinct signature...
Trend of maximal inspiratory pressure in mechanically ventilated patients: predictors
Pedro Caruso
2008-01-01
INTRODUCTION: It is known that mechanical ventilation and many of its features may affect the evolution of inspiratory muscle strength during ventilation. However, this evolution has not been described, nor have its predictors been studied. In addition, a probable parallel between inspiratory and limb muscle strength evolution has not been investigated. OBJECTIVE: To describe the variation over time of maximal inspiratory pressure during mechanical ventilation and its predictors. We also studied the possible relationship between the evolution of maximal inspiratory pressure and limb muscle strength. METHODS: A prospective observational study was performed in consecutive patients submitted to mechanical ventilation for > 72 hours. The maximal inspiratory pressure trend was evaluated by the linear regression of the daily maximal inspiratory pressure, and a logistic regression analysis was used to look for independent predictors of the maximal inspiratory pressure trend. Limb muscle strength was evaluated using the Medical Research Council score. RESULTS: One hundred and sixteen patients were studied, forty-four of whom (37.9%) presented a decrease in maximal inspiratory pressure over time. The group in which maximal inspiratory pressure decreased underwent deeper sedation, spent less time in pressure support ventilation and was extubated less frequently. The only independent predictor of the maximal inspiratory pressure trend was the level of sedation (OR = 1.55, 95% CI 1.003-2.408; p = 0.049). There was no relationship between the maximal inspiratory pressure trend and limb muscle strength. CONCLUSIONS: Around forty percent of the mechanically ventilated patients had a decreasing maximal inspiratory pressure during mechanical ventilation, which was independently associated with deeper levels of sedation. There was no relationship between the evolution of maximal inspiratory pressure and limb muscle strength.
Weldearegay, Dawit Fisseha; Yan, F.; Jiang, D.
2012-01-01
irrigation until all of the transpirable soil water had been depleted in the pots. Results showed that, particularly under D treatment, Alora depleted soil water faster than Trappe. In both varieties, flag leaf relative water content (RWC) was significantly lowered, while spikelet abscisic acid (ABA...... decreased shoot biomass and reduced seed set. When analysed across the varieties and the treatments, it was found that the reduction in seed set was closely correlated with the increase in spikelet ABA concentration, indicating that D and HD treatments induced greater spikelet ABA concentrations might have...... caused seed abortion. It was concluded that the grain yield reduction under D and HD treatments during anthesis in spring wheat is ascribed mainly to a lowered seed set and wheat varieties (i.e. Alora) with more dramatic increase in spikelet ABA concentration are more susceptible to D and HD treatment....
Maximal subgroups of finite groups
S. Srinivasan
1990-01-01
In finite groups, maximal subgroups play a very important role. Results in the literature show that if a maximal subgroup has a very small index in the whole group then it influences the structure of the group itself. In this paper we study the case in which the indices of the maximal subgroups of a group have a special type of relation with the Fitting subgroup of the group.
Goldenberg, Shira M; Chettiar, Jill; Simo, Annick; Silverman, Jay G; Strathdee, Steffanie A; Montaner, Julio S G; Shannon, Kate
2014-01-01
To explore factors associated with early sex work initiation and model the independent effect of early initiation on HIV infection and prostitution arrests among adult sex workers (SWs). Baseline data (2010-2011) were drawn from a cohort of SWs who exchanged sex for money within the last month and were recruited through time-location sampling in Vancouver, Canada. Analyses were restricted to adults ≥18 years old. SWs completed a questionnaire and HIV/sexually transmitted infection testing. Using multivariate logistic regression, we identified associations with early sex work initiation and modeled its independent effect on HIV infection and prostitution arrests among adult SWs. Of 508 SWs, 193 (38.0%) reported early sex work initiation, with 78.53% primarily street-involved SWs and 21.46% off-street SWs. HIV prevalence was 11.22%, which was 19.69% among early initiates. Early initiates were more likely to be Canadian born [adjusted odds ratio (AOR): 6.8, 95% confidence interval (CI): 2.42 to 19.02], to inject drugs (AOR: 1.6, 95% CI: 1.0 to 2.5), and to have worked for a manager (AOR: 2.22, 95% CI: 1.3 to 3.6) or been coerced into sex work (AOR: 2.3, 95% CI: 1.14 to 4.44). Early initiation retained an independent effect on increased risk of HIV infection (AOR: 2.5, 95% CI: 1.3 to 3.2) and prostitution arrests (AOR: 2.0, 95% CI: 1.3 to 3.2). Adolescent sex work initiation is concentrated among marginalized, drug-involved, and street-involved SWs. Early initiation has an independent effect of increasing HIV infection and criminalization among adult SWs. Findings suggest the need for evidence-based approaches to reduce harm among adult and youth SWs.
Finding Maximal Quasiperiodicities in Strings
Brodal, Gerth Stølting; Pedersen, Christian N. S.
2000-01-01
Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log^2 n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes......
Maximizing Entropy over Markov Processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2013-01-01
computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how......
Maximizing entropy over Markov processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2014-01-01
computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...
Influence Maximization in Social Networks: Towards an Optimal Algorithmic Solution
Borgs, Christian; Chayes, Jennifer; Lucier, Brendan
2012-01-01
Diffusion is a fundamental graph process, underpinning such phenomena as epidemic disease contagion and the spread of innovation by word-of-mouth. We address the algorithmic problem of finding a set of k initial seed nodes in a network so that the expected size of the resulting cascade is maximized, under the standard independent cascade model of network diffusion. Our main result is an algorithm for the influence maximization problem that obtains the near-optimal approximation factor of (1 - 1/e - epsilon), for any epsilon > 0, in time O((m+n)log(n) / epsilon^3) where n and m are the number of vertices and edges in the network. Our algorithm is nearly runtime-optimal (up to a logarithmic factor) as we establish a lower bound of Omega(m+n) on the runtime required to obtain a constant approximation. Our method also allows a provable tradeoff between solution quality and runtime: we obtain an O(1/beta)-approximation in time O(n log^3(n) * a(G) / beta) for any beta > 1, where a(G) denotes the arboricity of the d...
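The independent cascade model and the greedy seeding baseline that this paper improves upon can be sketched as follows. This Monte-Carlo greedy is the classic Kempe-Kleinberg-Tardos baseline, not the near-optimal algorithm of the abstract; the graph, probabilities and function names are illustrative assumptions.

```python
import random

def independent_cascade(adj, seeds, p, rng):
    """One simulation of the independent cascade model: every newly
    activated node gets a single chance to activate each neighbour
    with probability p. Returns the final cascade size."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return len(active)

def greedy_seeds(adj, k, p=0.1, runs=200, seed=0):
    """Monte-Carlo greedy seed selection (the classic baseline, not the
    paper's near-linear-time algorithm): repeatedly add the node with
    the best estimated marginal spread."""
    rng = random.Random(seed)
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    chosen = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for v in sorted(nodes - set(chosen)):
            est = sum(independent_cascade(adj, chosen + [v], p, rng)
                      for _ in range(runs)) / runs
            if est > best_spread:
                best, best_spread = v, est
        chosen.append(best)
    return chosen

# hub 0 with four leaves, plus a separate pair {5, 6}
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0], 5: [6], 6: [5]}
picked = greedy_seeds(adj, 1, p=0.5)
```

The greedy baseline needs many cascade simulations per candidate seed, which motivates the runtime improvements the abstract describes.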
Maximal switchability of centralized networks
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of N_s weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm, which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
Lythgow, Kieren T.; Hudson, Gavin; Andras, Peter; Chinnery, Patrick F.
2011-01-01
In the absence of a comprehensive experimentally derived mitochondrial proteome, several bioinformatic approaches have been developed to aid the identification of novel mitochondrial disease genes within mapped nuclear genetic loci. Often, many classifiers are combined to increase the sensitivity and specificity of the predictions. Here we show that the greatest sensitivity and specificity are obtained by using a combination of seven carefully selected classifiers. We also show that increasing the number of independent prediction methods can paradoxically decrease the accuracy of predicting mitochondrial localization. This approach will help to accelerate the identification of new mitochondrial disease genes by providing a principled way for the selection for combination of appropriate prediction methods of mitochondrial localization of proteins. PMID:21195798
Hamiltonian formalism and path entropy maximization
Davis, Sergio; González, Diego
2015-10-01
Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
Maximal speed of particles in super-Lévy process
LIN Zheng-yan; CHENG Zong-mao
2008-01-01
We introduce a super-Lévy process and study the maximal speed of all particles. The historical super-Lévy process is a measure on the set of paths. We study the maximal speed of all particles during a given time period, which turns out to be a function of the packing dimension of the time period. We calculate the Hausdorff dimension of the set of a-fast paths in the support and the range of the historical super-Lévy process.
Excap: maximization of haplotypic diversity of linked markers.
André Kahles
Genetic markers, defined as variable regions of DNA, can be utilized for distinguishing individuals or populations. As long as markers are independent, it is easy to combine the information they provide. For nonrecombinant sequences like mtDNA, choosing the right set of markers for forensic applications can be difficult and requires careful consideration. In particular, one wants to maximize the utility of the markers. Until now, this has mainly been done by hand. We propose an algorithm that finds the most informative subset of a set of markers. The algorithm uses a depth first search combined with a branch-and-bound approach. Since the worst case complexity is exponential, we also propose some data-reduction techniques and a heuristic. We implemented the algorithm and applied it to two forensic caseworks using mitochondrial DNA, which resulted in marker sets with significantly improved haplotypic diversity compared to previous suggestions. Additionally, we evaluated the quality of the estimation with an artificial dataset of mtDNA. The heuristic is shown to provide extensive speedup at little cost in accuracy.
Global haplotype partitioning for maximal associated SNP pairs
Pezeshk Hamid
2009-08-01
Background Global partitioning based on pairwise associations of SNPs has not previously been used to define haplotype blocks within genomes. Here, we define an association index based on LD between SNP pairs. We use Fisher's exact test to assess the statistical significance of the LD estimator. By this test, each SNP pair is characterized as associated, independent, or not statistically significant. We set limits on the maximum acceptable proportion of independent pairs within all blocks and search for the partitioning with maximal proportion of associated SNP pairs. Essentially, this model is reduced to a constrained optimization problem, the solution of which is obtained by iterating a dynamic programming algorithm. Results In comparison with other methods, our algorithm reports blocks of larger average size. Nevertheless, the haplotype diversity within the blocks is captured by a small number of tagSNPs. Resampling HapMap haplotypes under a block-based model of recombination showed that our algorithm is robust in reproducing the same partitioning for recombinant samples. Our algorithm performed better than previously reported models in a case-control association study aimed at mapping a single locus trait, based on simulation results that were evaluated by a block-based statistical test. Compared to other methods of haplotype block partitioning, ours performed best at detecting recombination hotspots. Conclusion Our proposed method divides chromosomes into the regions within which allelic associations of SNP pairs are maximized. This approach presents a native design for dimension reduction in genome-wide association studies. Our results show that the pairwise allelic association of SNPs can describe various features of genomic variation, in particular recombination hotspots.
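The reduction to a dynamic program over block boundaries can be illustrated with a simplified, unconstrained variant. The +1/-1/0 pair scores and the omission of the paper's limit on the proportion of independent pairs per block are assumptions made here for illustration.

```python
def best_partition(score):
    """Partition SNPs 0..n-1 into contiguous blocks maximizing the total
    score of pairs that fall inside the same block.  score[i][j] is +1 for
    an associated pair, -1 for an independent pair, 0 if not significant.
    Sketch of the dynamic-programming idea only."""
    n = len(score)
    best = [0.0] * (n + 1)   # best[j]: optimum over the first j SNPs
    cut = [0] * (n + 1)      # cut[j]: start of the last block in that optimum
    for j in range(1, n + 1):
        best[j], cut[j] = best[j - 1], j - 1       # singleton block {j-1}
        s = 0.0                                    # pair score inside block i-1..j-1
        for i in range(j - 1, 0, -1):
            # extend the candidate block one SNP to the left, adding the
            # scores of all new pairs involving SNP i-1
            s += sum(score[i - 1][t] for t in range(i, j))
            if best[i - 1] + s > best[j]:
                best[j], cut[j] = best[i - 1] + s, i - 1
    blocks, j = [], n                              # recover the partition
    while j > 0:
        blocks.append((cut[j], j - 1))
        j = cut[j]
    return best[n], blocks[::-1]
```

With four SNPs where pairs (0,1) and (2,3) are associated and (1,2) is independent, the optimum splits between SNPs 1 and 2.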
ON THE SPACES OF THE MAXIMAL POINTS
梁基华; 刘应明
2003-01-01
For a continuous domain D, a characterization of when the convex powerdomain CD is a domain hull of Max(CD) is given in terms of compact subsets of D. In this case, it is proved that the set of maximal points Max(CD) of CD with the relative Scott topology is homeomorphic to the set of all Scott compact subsets of Max(D) with the topology induced by the Hausdorff metric derived from a metric on Max(D), when Max(D) is metrizable.
Maximizing without difficulty: A modified maximizing scale and its correlates
Lai, Linda
2010-01-01
... included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cognition, desire for consistency, risk aversion, intrinsic motivation, self-efficacy and perceived workload, whereas...
Maximizing and customer loyalty: Are maximizers less loyal?
Linda Lai
2011-06-01
Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.
Are maximizers really unhappy? The measurement of maximizing tendency,
Dalia L. Diab
2008-06-01
Recent research suggesting that people who maximize are less happy than those who satisfice has received considerable fanfare. The current study investigates whether this conclusion reflects the construct itself or rather how it is measured. We developed an alternative measure of maximizing tendency that is theory-based, has good psychometric properties, and predicts behavioral outcomes. In contrast to the existing maximization measure, our new measure did not correlate with life (dis)satisfaction, nor with most maladaptive personality and decision-making traits. We conclude that the interpretation of maximizers as unhappy may be due to poor measurement of the construct. We present a more reliable and valid measure for future researchers to use.
Principles of maximally classical and maximally realistic quantum mechanics
S M Roy
2002-08-01
Recently Auberson, Mahoux, Roy and Singh have proved a long-standing conjecture of Roy and Singh: in 2N-dimensional phase space, a maximally realistic quantum mechanics can have quantum probabilities of no more than N + 1 complete commuting sets (CCS) of observables coexisting as marginals of one positive phase space density. Here I formulate a stationary principle which gives a nonperturbative definition of a maximally classical as well as maximally realistic phase space density. I show that the maximally classical trajectories are in fact exactly classical in the simple examples of coherent states and bound states of an oscillator and Gaussian free particle states. In contrast, it is known that the de Broglie–Bohm realistic theory gives highly nonclassical trajectories.
White Brian
2010-11-01
Background The genus Neisseria contains two important yet very different pathogens, N. meningitidis and N. gonorrhoeae, in addition to non-pathogenic species, of which N. lactamica is the best characterized. Genomic comparisons of these three bacteria will provide insights into the mechanisms and evolution of pathogenesis in this group of organisms, which are applicable to understanding these processes more generally. Results Non-pathogenic N. lactamica exhibits very similar population structure and levels of diversity to the meningococcus, whilst gonococci are essentially recent descendants of a single clone. All three species share a common core gene set estimated to comprise around 1190 CDSs, corresponding to about 60% of the genome. However, some of the nucleotide sequence diversity within this core genome is particular to each group, indicating that cross-species recombination is rare in this shared core gene set. Other than the meningococcal cps region, which encodes the polysaccharide capsule, relatively few members of the large accessory gene pool are exclusive to one species group, and cross-species recombination within this accessory genome is frequent. Conclusion The three Neisseria species groups represent coherent biological and genetic groupings which appear to be maintained by low rates of inter-species horizontal genetic exchange within the core genome. There is extensive evidence for exchange among positively selected genes and the accessory genome and some evidence of hitch-hiking of housekeeping genes with other loci. It is not possible to define a 'pathogenome' for this group of organisms and the disease causing phenotypes are therefore likely to be complex, polygenic, and different among the various disease-associated phenotypes observed.
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2009-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.80 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
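The kind of power analysis described can be sketched by simulation. The variance components and the plain OLS test below are invented for illustration and are not the hierarchical-model estimates of the case study.

```python
import numpy as np

def trend_power(n_years, n_sites, annual_decline, sd_year=0.3, sd_resid=0.5,
                n_sims=500, seed=0):
    """Monte Carlo power to detect a log-linear decline in mean CPE.
    Simulates year effects plus site-level noise around a declining trend,
    regresses log CPE on year by ordinary least squares, and counts how
    often the slope is significantly negative (one-sided, normal
    approximation).  All variance components are illustrative only."""
    rng = np.random.default_rng(seed)
    slope = np.log(1.0 - annual_decline)          # log-scale trend per year
    years = np.arange(n_years)
    hits = 0
    for _ in range(n_sims):
        year_eff = rng.normal(0.0, sd_year, n_years)
        y = (slope * years + year_eff)[:, None] \
            + rng.normal(0.0, sd_resid, (n_years, n_sites))
        x = np.repeat(years, n_sites).astype(float)
        yy = y.ravel()
        xc = x - x.mean()
        b = (xc * (yy - yy.mean())).sum() / (xc ** 2).sum()
        resid = yy - yy.mean() - b * xc
        se = np.sqrt((resid ** 2).sum() / (len(yy) - 2) / (xc ** 2).sum())
        if b / se < -1.645:                        # one-sided alpha = 0.05
            hits += 1
    return hits / n_sims
```

Runs with a strong decline should show near-certain detection, while a flat trend yields power near the nominal error rate, mirroring the qualitative message of the abstract.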
Y. Wang
2015-05-01
Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) observations of trace gases can be strongly influenced by clouds and aerosols. Thus it is important to identify clouds and characterise their properties. In a recent study, Wagner et al. (2014) developed a cloud classification scheme based on the MAX-DOAS measurements themselves, with which different "sky conditions" (e.g. clear sky, continuous clouds, broken clouds) can be distinguished. Here we apply this scheme to long-term MAX-DOAS measurements from 2011 to 2013 in Wuxi, China (31.57° N, 120.31° E). The original algorithm has been modified, in particular in order to account for smaller solar zenith angles (SZA). Instrumental degradation is accounted for to avoid artificial trends in the cloud classification. We compared the results of the MAX-DOAS cloud classification scheme to several independent measurements: aerosol optical depth from a nearby AERONET station and from MODIS, visibility derived from a visibility meter, and various cloud parameters from different satellite instruments (MODIS, OMI, and GOME-2). The most important findings from these comparisons are: (1) most cases characterized as clear sky with low or high aerosol load were associated with the respective AOD ranges obtained by AERONET and MODIS; (2) the observed dependences of MAX-DOAS results on cloud optical thickness and effective cloud fraction from satellite indicate that the cloud classification scheme is sensitive to cloud (optical) properties; (3) separation of cloudy scenes by cloud pressure shows that the MAX-DOAS cloud classification scheme is also capable of detecting high clouds; (4) some clear-sky conditions, especially with high aerosol load, classified from MAX-DOAS observations correspond to optically thin and low clouds derived by satellite observations, which probably indicates that the satellite cloud products contain valuable information on aerosols.
Jacob, Christian P; Nguyen, Thuy Trang; Dempfle, Astrid; Heine, Monika; Windemuth-Kieselbach, Christine; Baumann, Katarina; Jacob, Florian; Prechtl, Julian; Wittlich, Maike; Herrmann, Martin J; Gross-Lesch, Silke; Lesch, Klaus-Peter; Reif, Andreas
2010-06-01
While an interactive effect of genes with adverse life events is increasingly appreciated in current concepts of depression etiology, no data are presently available on interactions between genetic and environmental (G x E) factors with respect to personality and related disorders. The present study therefore aimed to detect main effects as well as interactions of serotonergic candidate genes (coding for the serotonin transporter, 5-HTT; the serotonin autoreceptor, HTR1A; and the enzyme which synthesizes serotonin in the brain, TPH2) with the burden of life events (#LE) in two independent samples consisting of 183 patients suffering from personality disorders and 123 patients suffering from adult attention deficit/hyperactivity disorder (aADHD). Simple analyses ignoring possible G x E interactions revealed no evidence for associations of either #LE or of the considered polymorphisms in 5-HTT and TPH2. Only the G allele of HTR1A rs6295 seemed to increase the risk of emotional-dramatic cluster B personality disorders (p = 0.019, in the personality disorder sample) and to decrease the risk of anxious-fearful cluster C personality disorders (p = 0.016, in the aADHD sample). We extended the initial simple model by taking a G x E interaction term into account, since this approach may better fit the data indicating that the effect of a gene is modified by stressful life events or, vice versa, that stressful life events only have an effect in the presence of a susceptibility genotype. By doing so, we observed nominal evidence for G x E effects as well as main effects of 5-HTT-LPR and the TPH2 SNP rs4570625 on the occurrence of personality disorders. Further replication studies, however, are necessary to validate the apparent complexity of G x E interactions in disorders of human personality.
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-02-26
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
Hohman, Timothy J; Bush, William S; Jiang, Lan; Brown-Gentry, Kristin D; Torstenson, Eric S; Dudek, Scott M; Mukherjee, Shubhabrata; Naj, Adam; Kunkle, Brian W; Ritchie, Marylyn D; Martin, Eden R; Schellenberg, Gerard D; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Haines, Jonathan L; Thornton-Wells, Tricia A
2016-02-01
Late-onset Alzheimer disease (AD) has a complex genetic etiology, involving locus heterogeneity, polygenic inheritance, and gene-gene interactions; however, the investigation of interactions in recent genome-wide association studies has been limited. We used a biological knowledge-driven approach to evaluate gene-gene interactions for consistency across 13 data sets from the Alzheimer Disease Genetics Consortium. Fifteen single nucleotide polymorphism (SNP)-SNP pairs within 3 gene-gene combinations were identified: SIRT1 × ABCB1, PSAP × PEBP4, and GRIN2B × ADRA1A. In addition, we extend a previously identified interaction from an endophenotype analysis between RYR3 × CACNA1C. Finally, post hoc gene expression analyses of the implicated SNPs further implicate SIRT1 and ABCB1, and implicate CDH23 which was most recently identified as an AD risk locus in an epigenetic analysis of AD. The observed interactions in this article highlight ways in which genotypic variation related to disease may depend on the genetic context in which it occurs. Further, our results highlight the utility of evaluating genetic interactions to explain additional variance in AD risk and identify novel molecular mechanisms of AD pathogenesis.
Maximizing ROI with yield management
Neil Snyder
2001-01-01
.... the technology is based on the concept of yield management, which aims to sell the right product to the right customer at the right price and the right time therefore maximizing revenue, or yield...
Are CEOs Expected Utility Maximizers?
John List; Charles Mason
2009-01-01
Are individuals expected utility maximizers? This question represents much more than academic curiosity. In a normative sense, at stake are the fundamental underpinnings of the bulk of the last half-century's models of choice under uncertainty. From a positive perspective, the ubiquitous use of benefit-cost analysis across government agencies renders the expected utility maximization paradigm literally the only game in town. In this study, we advance the literature by exploring CEO's preferen...
Gaussian maximally multipartite entangled states
Facchi, Paolo; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio
2009-01-01
We introduce the notion of maximally multipartite entangled states (MMES) in the context of Gaussian continuous variable quantum systems. These are bosonic multipartite states that are maximally entangled over all possible bipartitions of the system. By considering multimode Gaussian states with constrained energy, we show that perfect MMESs, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of MMESs and their frustration for n <= 7.
All maximally entangling unitary operators
Cohen, Scott M. [Department of Physics, Duquesne University, Pittsburgh, Pennsylvania 15282 (United States); Department of Physics, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213 (United States)
2011-11-15
We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d_A ≤ d_B, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ, and a proof that these capacities must be equal when d_A = d_B.
Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo
2016-01-01
Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\\gamma\\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
The Negative Consequences of Maximizing in Friendship Selection.
Newman, David B; Schug, Joanna; Yuki, Masaki; Yamada, Junko; Nezlek, John B
2017-02-27
Previous studies have shown that the maximizing orientation, reflecting a motivation to select the best option among a given set of choices, is associated with various negative psychological outcomes. In the present studies, we examined whether these relationships extend to friendship selection and how the number of options for friends moderated these effects. Across 5 studies, maximizing in selecting friends was negatively related to life satisfaction, positive affect, and self-esteem, and was positively related to negative affect and regret. In Study 1, a maximizing in selecting friends scale was created, and regret mediated the relationships between maximizing and well-being. In a naturalistic setting in Studies 2a and 2b, the tendency to maximize among those who participated in the fraternity and sorority recruitment process was negatively related to satisfaction with their selection, and positively related to regret and negative affect. In Study 3, daily levels of maximizing were negatively related to daily well-being, and these relationships were mediated by daily regret. In Study 4, we extended the findings to samples from the U.S. and Japan. When participants who tended to maximize were faced with many choices, operationalized as the daily number of friends met (Study 3) and relational mobility (Study 4), the opportunities to regret a decision increased and further diminished well-being. These findings imply that, paradoxically, attempts to maximize when selecting potential friends are detrimental to one's well-being.
Modularity maximization using completely positive programming
Yazdanparast, Sakineh; Havens, Timothy C.
2017-04-01
Community detection is one of the most prominent problems of social network analysis. In this paper, a novel method for Modularity Maximization (MM) for community detection is presented which exploits the Alternating Direction Augmented Lagrangian (ADAL) method for maximizing a generalized form of Newman's modularity function. We first transform Newman's modularity function into a quadratic program and then use Completely Positive Programming (CPP) to map the quadratic program to a linear program, which provides the globally optimal maximum modularity partition. In order to solve the proposed CPP problem, a closed-form solution using the ADAL merged with a rank minimization approach is proposed. The performance of the proposed method is evaluated on several real-world data sets used as community detection benchmarks. Simulation results show that the proposed technique provides outstanding results in terms of modularity value for crisp partitions.
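For reference, Newman's modularity of a crisp partition (the objective being maximized) can be evaluated directly; this sketch covers only the objective, not the CPP/ADAL solver of the paper.

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * [c_i == c_j]
    for a symmetric adjacency matrix A and a crisp node labeling."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                 # node degrees
    two_m = A.sum()                   # 2m: total number of edge endpoints
    same = np.equal.outer(labels, labels)   # delta(c_i, c_j)
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)
```

For a graph of two disjoint edges with each edge as its own community, Q = 0.5, the familiar textbook value.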
A. Garmroodi Asil
2017-09-01
To further reduce the sulfur dioxide emission of the entire refining process, two scenarios, acid gas preheat and air preheat, are investigated when either of them is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and highest combustion chamber temperature are slightly higher for acid gas preheat, but air preheat is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.
Algebraic curves of maximal cyclicity
Caubergh, Magdalena; Dumortier, Freddy
2006-01-01
The paper deals with analytic families of planar vector fields, studying methods to detect the cyclicity of a non-isolated closed orbit, i.e. the maximum number of limit cycles that can locally bifurcate from it. It is known that this multi-parameter problem can be reduced to a single-parameter one, in the sense that there exist analytic curves in parameter space along which the maximal cyclicity can be attained. In that case one speaks about a maximal cyclicity curve (mcc) in case only the number is considered and of a maximal multiplicity curve (mmc) in case the multiplicity is also taken into account. In view of obtaining efficient algorithms for detecting the cyclicity, we investigate whether such mcc or mmc can be algebraic or even linear depending on certain general properties of the families or of their associated Bautin ideal. In any case by well chosen examples we show that prudence is appropriate.
BOUNDEDNESS OF MAXIMAL SINGULAR INTEGRALS
CHEN JIECHENG; ZHU XIANGRONG
2005-01-01
The authors study singular integrals under the Hörmander condition with a measure not satisfying the doubling condition. First, if the corresponding singular integral is bounded from L2 to itself, it is proved that the maximal singular integral is bounded from L∞ to RBMO unless it is infinite μ-a.e. on R^d. A sufficient condition and a necessary condition for the maximal singular integral to be bounded from L2 to itself are also obtained. There is a small gap between the two conditions.
IMRank: Influence Maximization via Finding Self-Consistent Ranking
Cheng, Suqi; Shen, Hua-Wei; Huang, Junming; Chen, Wei; Cheng, Xue-Qi
2014-01-01
Influence maximization, fundamental for word-of-mouth marketing and viral marketing, aims to find a set of seed nodes maximizing influence spread on a social network. Early methods mainly fall into two paradigms with certain benefits and drawbacks: (1) greedy algorithms, selecting seed nodes one by one, give a guaranteed accuracy relying on the accurate approximation of influence spread, with high computational cost; (2) heuristic algorithms, estimating influence spread using efficient heuristics,...
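A minimal sketch of the greedy paradigm, using Monte Carlo estimation of spread under the independent cascade model; the graph encoding, propagation probability, and trial count are assumptions for illustration, not IMRank's ranking-based method.

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One independent-cascade simulation; returns the number of activated
    nodes.  `graph` maps a node to the list of its out-neighbors."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return len(active)

def greedy_seeds(graph, k, p=0.1, trials=200, seed=0):
    """Greedy influence maximization: repeatedly add the node with the
    largest Monte Carlo estimate of marginal gain in expected spread."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    chosen = []
    for _ in range(k):
        def avg_spread(u):
            return sum(simulate_ic(graph, chosen + [u], p, rng)
                       for _ in range(trials)) / trials
        chosen.append(max(nodes - set(chosen), key=avg_spread))
    return chosen
```

On a directed star with certain propagation (p = 1), the hub is always the best single seed, matching intuition.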
Understanding maximal repetitions in strings
Crochemore, Maxime
2008-01-01
The cornerstone of any algorithm computing all repetitions in a string of length n in O(n) time is the fact that the number of runs (or maximal repetitions) is O(n). We give a simple proof of this result. As a consequence of our approach, the stronger result concerning the linearity of the sum of exponents of all runs follows easily.
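The objects being counted can be made concrete with a naive enumeration straight from the definition; a run is reported as (start, end, period). This cubic-time check is for illustration only; the point of the result above is that the number of runs is O(n) and they can be computed in linear time.

```python
def runs(s):
    """All runs (maximal repetitions) in s, as (start, end, period) with
    s[start:end] having smallest period `period`, end - start >= 2*period,
    and the repetition extendable in neither direction.  Naive check."""
    n = len(s)
    found = set()
    for i in range(n):
        for p in range(1, (n - i) // 2 + 1):
            j = i + p
            while j < n and s[j] == s[j - p]:    # extend right maximally
                j += 1
            if j - i < 2 * p:
                continue                          # fewer than two full periods
            if i > 0 and s[i - 1] == s[i - 1 + p]:
                continue                          # extendable to the left
            # keep only the smallest period for this extent
            if all(j - i < 2 * q or any(s[t] != s[t - q] for t in range(i + q, j))
                   for q in range(1, p)):
                found.add((i, j, p))
    return sorted(found)
```

For example, "aabaab" contains exactly three runs: the two occurrences of "aa" (period 1) and the whole string (period 3).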
Finding the Maximizers of the Information Divergence from an Exponential Family
Rauh, Johannes
2009-01-01
This paper investigates maximizers of the information divergence from an exponential family $E$. It is shown that the $rI$-projection of a maximizer $P$ to $E$ is a convex combination of $P$ and a probability measure $P_-$ with disjoint support and the same value of the sufficient statistics $A$. This observation can be used to transform the original problem of maximizing $D(\cdot||E)$ over the set of all probability measures into the maximization of a function $\bar D$ over a convex subset of $\ker A$. The global maximizers of both problems correspond to each other. Furthermore, finding all local maximizers of $\bar D$ yields all local maximizers of $D(\cdot||E)$. This paper also proposes two algorithms to find the maximizers of $\bar D$ and applies them to two examples, where the maximizers of $D(\cdot||E)$ were not known before.
Computing Maximally Supersymmetric Scattering Amplitudes
Stankowicz, James Michael, Jr.
This dissertation reviews work in computing N = 4 super-Yang--Mills (sYM) and N = 8 maximally supersymmetric gravity (mSUGRA) scattering amplitudes in D = 4 spacetime dimensions in novel ways. After a brief introduction and overview in Ch. 1, the various techniques used to construct amplitudes in the remainder of the dissertation are discussed in Ch. 2. This includes several new concepts such as d log and pure integrand bases, as well as how to construct the amplitude using exactly one kinematic point where it vanishes. Also included in this chapter is an outline of the Mathematica package on shell diagrams and numerics.m (osdn) that was developed for the computations herein. The rest of the dissertation is devoted to explicit examples. In Ch. 3, the starting point is tree-level sYM amplitudes that have integral representations with residues that obey amplitude relations. These residues are shown to have corresponding residue numerators that allow a double copy prescription that results in mSUGRA residues. In Ch. 4, the two-loop four-point sYM amplitude is constructed in several ways, showcasing many of the techniques of Ch. 2; this includes an example of how to use osdn. The two-loop five-point amplitude is also presented in a pure integrand representation with comments on how it was constructed from one homogeneous cut of the amplitude. On-going work on the two-loop n-point amplitude is presented at the end of Ch. 4. In Ch. 5, the three-loop four-point amplitude is presented in the d log representation and in the pure integrand representation. In Ch. 6, there are several examples of four- through seven-loop planar diagrams that illustrate how considerations of the singularity structure of the amplitude underpin dual-conformal invariance. Taken with the previous examples, this is additional evidence that the structure known to exist in the planar sector extends to the full theory. At the end of this chapter is a proof that all mSUGRA amplitudes have a pole at
Integral circulant graphs of prime power order with maximal energy
Sander, Jürgen W; 10.1016/j.laa.2011.05.039
2011-01-01
The energy of a graph is the sum of the moduli of the eigenvalues of its adjacency matrix. We study the energy of integral circulant graphs, also called gcd graphs, which can be characterized by their vertex count n and a set D of divisors of n in such a way that they have vertex set Zn and edge set {{a, b} : a, b in Zn; gcd(a - b, n) in D}. Using tools from convex optimization, we study the maximal energy among all integral circulant graphs of prime power order ps and varying divisor sets D. Our main result states that this maximal energy approximately lies between s(p - 1)p^(s-1) and twice this value. We construct suitable divisor sets for which the energy lies in this interval. We also characterize hyperenergetic integral circulant graphs of prime power order and exhibit an interesting topological property of their divisor sets.
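A small numeric sanity check of the definitions, assuming D contains only proper divisors of n (so the graph has no loops): ICG(5, {1}) is the complete graph K5, whose energy is 2(n - 1) = 8. This sketch only evaluates the energy of a given gcd graph; it does not reproduce the paper's convex-optimization analysis.

```python
import numpy as np
from math import gcd

def icg_energy(n, D):
    """Energy (sum of |eigenvalues|) of the integral circulant / gcd graph
    ICG(n, D): vertex set Z_n, with a ~ b iff gcd(a - b, n) is in D.
    D is assumed to contain only proper divisors of n (no self-loops)."""
    A = np.array([[1.0 if gcd(a - b, n) in D else 0.0 for b in range(n)]
                  for a in range(n)])
    # The matrix is symmetric, so use the Hermitian eigensolver.
    return float(np.abs(np.linalg.eigvalsh(A)).sum())
```

For prime n and D = {1} every pair of distinct vertices is adjacent, giving K_n with spectrum {n-1, -1, ..., -1} and energy 2(n - 1).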
NEIGHBORHOOD UNION OF INDEPENDENT SETS AND HAMILTONICITY OF CLAWFREE GRAPHS
徐新萍
2005-01-01
Let G be a graph. For any u ∈ V(G), let N(u) denote the neighborhood of u and d(u) = |N(u)| the degree of u. For any U ⊆ V(G), let N(U) = ∪_{u∈U} N(u) and d(U) = |N(U)|. A graph G is called claw-free if it has no induced subgraph isomorphic to K_{1,3}. One of the fundamental results concerning cycles in claw-free graphs is due to Tian Feng, et al.: let G be a 2-connected claw-free graph of order n with d(u) + d(v) + d(w) ≥ n - 2 for every independent vertex set {u, v, w} of G; then G is Hamiltonian.
Note on maximal distance separable codes
YANG Jian-sheng; WANG De-xiu; JIN Qing-fang
2009-01-01
In this paper, the maximal length of maximal distance separable (MDS) codes is studied, and a new upper bound formula for the maximal length of MDS codes is obtained. In particular, the exact values of the maximal length of MDS codes are given for some parameters.
Adaptive maximal poisson-disk sampling on surfaces
Yan, Dongming
2012-01-01
In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which is the key ingredient of the adaptive maximal Poisson-disk sampling framework. Moreover, we adapt the presented sampling framework for remeshing applications. Several novel and efficient operators are developed for improving the sampling/meshing quality over the state of the art.
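The basic (non-adaptive, planar) dart-throwing idea behind maximal Poisson-disk sampling can be sketched as follows; the paper's setting (varying radii on surfaces, with explicit gap analysis to certify maximality) is considerably more involved.

```python
import random

def poisson_disk_darts(r, width=1.0, height=1.0, max_misses=2000, seed=1):
    """Plain dart throwing: accept a uniformly random candidate point if it
    lies at least r from every accepted sample; stop after max_misses
    consecutive rejections (a practical stand-in for true maximality,
    which the paper instead certifies via gap analysis)."""
    rng = random.Random(seed)
    pts, misses = [], 0
    while misses < max_misses:
        x, y = rng.random() * width, rng.random() * height
        if all((x - a) ** 2 + (y - b) ** 2 >= r * r for a, b in pts):
            pts.append((x, y))
            misses = 0
        else:
            misses += 1
    return pts
```

Every accepted pair of points is separated by at least r, which is the defining Poisson-disk property.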
Gap processing for adaptive maximal Poisson-disk sampling
Yan, Dongming
2013-09-01
In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed. We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state of the art in surface remeshing.
Ringe, Wolf-Georg
2013-01-01
This paper re-evaluates the corporate governance concept of ‘board independence’ against the disappointing experiences during the 2007-08 financial crisis. Independent or outside directors had long been seen as an essential tool to improve the monitoring role of the board. Yet the crisis revealed that they did not prevent firms' excessive risk taking; further, these directors sometimes showed serious deficits in understanding the business they were supposed to control, and remained passive in addressing structural problems. A closer look reveals that under the surface of seemingly unanimous consensus about board independence in Western jurisdictions, a surprising disharmony prevails about the justification, extent and purpose of independence requirements. These considerations lead me to question the benefits of the current system. Instead, this paper proposes a new, ‘functional’ concept of board independence.
Sensitivity to conversational maxims in deaf and hearing children.
Surian, Luca; Tedoldi, Mariantonia; Siegal, Michael
2010-09-01
We investigated whether access to a sign language affects the development of pragmatic competence in three groups of deaf children aged 6 to 11 years: native signers from deaf families receiving bimodal/bilingual instruction, native signers from deaf families receiving oralist instruction and late signers from hearing families receiving oralist instruction. The performance of these children was compared to a group of hearing children aged 6 to 7 years on a test designed to assess sensitivity to violations of conversational maxims. Native signers with bimodal/bilingual instruction were as able as the hearing children to detect violations that concern truthfulness (Maxim of Quality) and relevance (Maxim of Relation). On items involving these maxims, they outperformed both the late signers and native signers attending oralist schools. These results dovetail with previous findings on mindreading in deaf children and underscore the role of early conversational experience and instructional setting in the development of pragmatics.
Anonymous
2006-01-01
Milo Djukanovic, Prime Minister of Montenegro, won a key referendum May 21 when voters in his tiny, mountainous nation endorsed a plan to split from Serbia and become an independent state. This marked a final step in the breakup of the former Yugoslavia, which had been formed by six republics.
Maximal strength training improves cycling economy in competitive cyclists.
Sunde, Arnstein; Støren, Oyvind; Bjerkaas, Marius; Larsen, Morten H; Hoff, Jan; Helgerud, Jan
2010-08-01
The purpose of the present study was to investigate the effect of maximal strength training on cycling economy (CE) at 70% of maximal oxygen consumption (Vo2max), work efficiency in cycling at 70% Vo2max, and time to exhaustion at maximal aerobic power. Responses in 1 repetition maximum (1RM) and rate of force development (RFD) in half-squats, Vo2max, CE, work efficiency, and time to exhaustion at maximal aerobic power were examined. Sixteen competitive road cyclists (12 men and 4 women) were randomly assigned into either an intervention or a control group. Thirteen (10 men and 3 women) cyclists completed the study. The intervention group (7 men and 1 woman) performed half-squats, 4 sets of 4 repetitions maximum, 3 times per week for 8 weeks, as a supplement to their normal endurance training. The control group continued their normal endurance training during the same period. The intervention manifested significant (p < 0.05) improvements in 1RM (14.2%), RFD (16.7%), CE (4.8%), work efficiency (4.7%), and time to exhaustion at pre-intervention maximal aerobic power (17.2%). No changes were found in Vo2max or body weight. The control group exhibited an improvement in work efficiency (1.4%), but this improvement was significantly (p < 0.05) smaller than that in the intervention group. No changes from pre- to postvalues in any of the other parameters were apparent in the control group. In conclusion, maximal strength training for 8 weeks improved CE and efficiency and increased time to exhaustion at maximal aerobic power among competitive road cyclists, without change in maximal oxygen uptake, cadence, or body weight. Based on the results from the present study, we advise cyclists to include maximal strength training in their training programs.
Are Independent Fiscal Institutions Really Independent?
Slawomir Franek
2015-08-01
In the last decade the number of independent fiscal institutions (known also as fiscal councils) has tripled. They play an important oversight role over fiscal policy-making in democratic societies, especially as they seek to restore public finance stability in the wake of the recent financial crisis. Although common functions of such institutions include a role in analysis of fiscal policy, forecasting, monitoring compliance with fiscal rules or costing of spending proposals, their roles, resources and structures vary considerably across countries. The aim of the article is to determine the degree of independence of such institutions based on the analysis of the independence index of independent fiscal institutions. The analysis of this index's values may be useful to determine the relations between the degree of independence of fiscal councils and the fiscal performance of particular countries. The data used to calculate the index values will be derived from the European Commission and the IMF, which collect sets of information about the characteristics of the activity of fiscal councils.
Algorithms over partially ordered sets
Baer, Robert M.; Østerby, Ole
1969-01-01
…in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.
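As a concrete sketch of the chain-enumeration task this abstract describes (not the authors' Algol algorithm), a depth-first search over the cover relation enumerates every maximal chain of a small finite poset:

```python
def maximal_chains(elements, leq):
    """Enumerate all maximal chains of a finite poset.

    `elements` is a list; `leq(a, b)` returns True iff a <= b in the order.
    A maximal chain starts at a minimal element, moves along cover
    relations, and ends at a maximal element.
    """
    # b covers a iff a < b and no c lies strictly between them.
    covers = {a: [b for b in elements
                  if a != b and leq(a, b)
                  and not any(c not in (a, b) and leq(a, c) and leq(c, b)
                              for c in elements)]
              for a in elements}
    minimal = [a for a in elements
               if not any(b != a and leq(b, a) for b in elements)]

    def extend(chain):
        succs = covers[chain[-1]]
        if not succs:                      # reached a maximal element
            yield chain
        for s in succs:
            yield from extend(chain + [s])

    for m in minimal:
        yield from extend([m])

# Divisibility poset on {1, 2, 3, 6}: the two maximal chains are
# 1 | 2 | 6 and 1 | 3 | 6.
chains = sorted(maximal_chains([1, 2, 3, 6], lambda a, b: b % a == 0))
```

The DFS visits each maximal chain exactly once, so the running time is proportional to the total length of the output, which the abstract's combinatorial question shows can be exponential in n.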
Asymptotics of robust utility maximization
Knispel, Thomas
2012-01-01
For a stochastic factor model we maximize the long-term growth rate of robust expected power utility with parameter $\\lambda\\in(0,1)$. Using duality methods the problem is reformulated as an infinite time horizon, risk-sensitive control problem. Our results characterize the optimal growth rate, an optimal long-term trading strategy and an asymptotic worst-case model in terms of an ergodic Bellman equation. With these results we propose a duality approach to a "robust large deviations" criterion for optimal long-term investment.
Multivariate residues and maximal unitarity
Søgaard, Mads; Zhang, Yang
2013-12-01
We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.
Maximal Congruences on Some Semigroups
Jintana Sanwong; R.P. Sullivan
2007-01-01
In 1976 Howie proved that a finite congruence-free semigroup is a simple group if it has at least three elements but no zero element. Infinite congruence-free semigroups are far more complicated to describe, but some have been constructed using semigroups of transformations (for example, by Howie in 1981 and by Marques in 1983). Here, for certain semigroups S of numbers and of transformations, we determine all congruences ρ on S such that S/ρ is congruence-free, that is, we describe all maximal congruences on such semigroups S.
Knowledge discovery by accuracy maximization.
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-04-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.
Inapproximability of maximal strip recovery
Jiang, Minghui
2009-01-01
In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks, that is, segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given d genomic maps as sequences of gene markers, the objective of MSR-d is to find d subsequences, one subsequence of each genomic map, such that the total length of syntenic blocks in these subsequences is maximized. For any constant d ≥ 2, a polynomial-time 2d-approximation for MSR-d was previously known. In this paper, we show that for any d ≥ 2, MSR-d is APX-hard, even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provi…
Distributed Maximality based CTL Model Checking
Djamel Eddine Saidouni
2010-05-01
In this paper we investigate an approach to performing distributed CTL model checking on a network of workstations using Kleene's three-valued logic. The state space is partitioned among the network nodes, and we represent each incomplete state space as a Maximality-based Labeled Transition System (MLTS), which is able to express true concurrency. Each node executes the same algorithm in parallel for a given property on its incomplete MLTS, computing the set of states that satisfy or fail the property; states for which the partial state space lacks sufficient information for a precise answer concerning the complete state space are assigned the third truth value, "unknown". To resolve these unknowns, the nodes exchange the information needed to conclude the result for the complete state space. An experimental version of the algorithm is currently being implemented in the functional programming language Erlang.
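A minimal sketch of the three-valued (Kleene) connectives such a checker evaluates, with Python's None standing in for "unknown" (this encoding is an illustrative choice, not the paper's):

```python
# Kleene's strong three-valued logic over {True, False, None},
# where None means "unknown given the partial state space".
def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False          # one definite False decides the conjunction
    if a is None or b is None:
        return None           # otherwise, unknown is contagious
    return True

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))   # De Morgan duality
```

The point of the third value is visible in `k_and(False, None) == False`: a node can sometimes answer definitively from its fragment alone, and only the genuinely undetermined states trigger an information exchange.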
Witten spinors on maximal, conformally flat hypersurfaces
Frauendiener, Jörg; Szabados, László B
2011-01-01
The boundary conditions that exclude zeros of the solutions of the Witten equation (and hence guarantee the existence of a 3-frame satisfying the so-called special orthonormal frame gauge conditions) are investigated. We determine the general form of the conformally invariant boundary conditions for the Witten equation, and find the boundary conditions that characterize the constant and the conformally constant spinor fields among the solutions of the Witten equations on compact domains in extrinsically and intrinsically flat, and on maximal, intrinsically globally conformally flat spacelike hypersurfaces, respectively. We also provide a number of exact solutions of the Witten equation with various boundary conditions (both at infinity and on inner or outer boundaries) that single out nowhere vanishing spinor fields on the flat, non-extreme Reissner–Nordström and Brill–Lindquist data sets. Our examples show that there is an interplay between the boundary conditions, the global topology of the hypersurface…
Maximal energy extraction under discrete diffusive exchange
Hay, M. J., E-mail: hay@princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Schiff, J. [Department of Mathematics, Bar-Ilan University, Ramat Gan 52900 (Israel); Fisch, N. J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
Cardiovascular consequences of bed rest: effect on maximal oxygen uptake
Convertino, V. A.
1997-01-01
Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting it is independent of any disease state. The magnitude of reduction in VO2max is dependent on duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However, reduction in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake.
Twitch interpolation technique in testing of maximal muscle strength
Bülow, P M; Nørregaard, J; Danneskiold-Samsøe, B
1993-01-01
The aim was to study the methodological aspects of the muscle twitch interpolation technique in estimating the maximal force of contraction in the quadriceps muscle utilizing commercial muscle testing equipment. Six healthy subjects participated in seven sets of experiments testing the effects on...
Maximal Regularity of the Discrete Harmonic Oscillator Equation
Airton Castro
2009-01-01
We give a representation of the solution for the best approximation of the harmonic oscillator equation formulated in a general Banach space setting, and a characterization of lp-maximal regularity (or well-posedness) solely in terms of R-boundedness properties of the resolvent operator involved in the equation.
Neuromuscular fatigue during dynamic maximal strength and hypertrophic resistance loadings.
Walker, Simon; Davis, Lisa; Avela, Janne; Häkkinen, Keijo
2012-06-01
The purpose of this study was to compare the acute neuromuscular fatigue during dynamic maximal strength and hypertrophic loadings, which are known to cause different adaptations underlying strength gain during training. Thirteen healthy, untrained males performed two leg press loadings, one week apart, consisting of 15 sets of 1 repetition maximum (MAX) and 5 sets of 10 repetition maximums (HYP). Concentric load and muscle activity, electromyography (EMG) amplitude and median frequency, were assessed throughout each set. Additionally, maximal bilateral isometric force and muscle activity were assessed pre-, mid-, and up to 30 min post-loading. Concentric load during MAX was decreased after set 10 (P […] muscle activity during HYP loading. Copyright © 2011 Elsevier Ltd. All rights reserved.
The maximal D = 4 supergravities
Wit, Bernard de [Institute for Theoretical Physics and Spinoza Institute, Utrecht University, Postbus 80.195, NL-3508 TD Utrecht (Netherlands); Samtleben, Henning [Laboratoire de Physique, ENS Lyon, 46 allee d' Italie, F-69364 Lyon CEDEX 07 (France); Trigiante, Mario [Dept. of Physics, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Turin (Italy)
2007-06-15
All maximal supergravities in four space-time dimensions are presented. The ungauged Lagrangians can be encoded in an E7(7)-Sp(56; R)/GL(28) matrix associated with the freedom of performing electric/magnetic duality transformations. The gauging is defined in terms of an embedding tensor θ which encodes the subgroup of E7(7) that is realized as a local invariance. This embedding tensor may imply the presence of magnetic charges which require corresponding dual gauge fields. The latter can be incorporated by using a recently proposed formulation that involves tensor gauge fields in the adjoint representation of E7(7). In this formulation the results take a universal form irrespective of the electric/magnetic duality basis. We present the general class of supersymmetric and gauge invariant Lagrangians and discuss a number of applications.
Constraint Propagation as Information Maximization
Abdallah, A Nait
2012-01-01
Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution -- that is, as gain in information about the solution. As illustrative example we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.
About non-maximality of the action functional
W. Freire
2012-09-01
In this work we first review some cases where the action exhibits a minimum or a saddle-point criticality for velocity-independent potentials V(x, t), and a maximum when the potential is velocity-dependent, V(x, ẋ, t). We then use the second-order functional ("directional") derivative to present a mathematically rigorous proof of the non-maximality of the classical action functional for velocity-independent potentials V(x, t).
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
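The paper's central claim, that an irreversible linear chain becomes more reliable as states are added, can be illustrated numerically. In the sketch below the holding rates are an assumption chosen so every chain has mean completion time 1, which makes the variability directly comparable across chain lengths:

```python
import math
import random

def completion_time(n_states, rng):
    """Sample the time for an irreversible linear chain to step through
    n_states, each held for an Exp(n_states)-distributed time, so the mean
    total duration is 1 for every chain length (a normalization chosen
    here purely to compare variability)."""
    return sum(rng.expovariate(n_states) for _ in range(n_states))

def cv(samples):
    """Coefficient of variation: sample std / sample mean."""
    m = sum(samples) / len(samples)
    var = sum((x - m) ** 2 for x in samples) / (len(samples) - 1)
    return math.sqrt(var) / m

rng = random.Random(0)
for n in (1, 4, 16):
    times = [completion_time(n, rng) for _ in range(20000)]
    # The total time is Erlang(n)-distributed, with theoretical
    # CV = 1/sqrt(n): timing jitter falls as states are added.
    print(n, round(cv(times), 2))
```

This reproduces only the variance-reduction mechanism; the paper's actual contribution, optimizing chain topology under energy penalties, requires the numerical optimization it describes.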
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Efficient Conservation in a Utility-Maximization Framework
Frank W. Davis
2006-06-01
Systematic planning for biodiversity conservation is being conducted at scales ranging from global to national to regional. The prevailing planning paradigm is to identify the minimum land allocations needed to reach specified conservation targets or maximize the amount of conservation accomplished under an area or budget constraint. We propose a more general formulation for setting conservation priorities that involves goal setting, assessing the current conservation system, developing a scenario of future biodiversity given the current conservation system, and allocating available conservation funds to alter that scenario so as to maximize future biodiversity. Under this new formulation for setting conservation priorities, the value of a site depends on resource quality, threats to resource quality, and costs. This planning approach is designed to support collaborative processes and negotiation among competing interest groups. We demonstrate these ideas with a case study of the Sierra Nevada bioregion of California.
Viral quasispecies assembly via maximal clique enumeration.
Töpfer, Armin; Marschall, Tobias; Bull, Rowena A; Luciani, Fabio; Schönhuth, Alexander; Beerenwinkel, Niko
2014-03-01
Virus populations can display high genetic diversity within individual hosts. The intra-host collection of viral haplotypes, called viral quasispecies, is an important determinant of virulence, pathogenesis, and treatment outcome. We present HaploClique, a computational approach to reconstruct the structure of a viral quasispecies from next-generation sequencing data as obtained from bulk sequencing of mixed virus samples. We develop a statistical model for paired-end reads accounting for mutations, insertions, and deletions. Using an iterative maximal clique enumeration approach, read pairs are assembled into haplotypes of increasing length, eventually enabling global haplotype assembly. The performance of our quasispecies assembly method is assessed on simulated data for varying population characteristics and sequencing technology parameters. Owing to its paired-end handling, HaploClique compares favorably to state-of-the-art haplotype inference methods. It can reconstruct error-free full-length haplotypes from low coverage samples and detect large insertions and deletions at low frequencies. We applied HaploClique to sequencing data derived from a clinical hepatitis C virus population of an infected patient and discovered a novel deletion of length 357±167 bp that was validated by two independent long-read sequencing experiments. HaploClique is available at https://github.com/armintoepfer/haploclique. A summary of this paper appears in the proceedings of the RECOMB 2014 conference, April 2-5.
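The maximal clique enumeration at the heart of this assembly scheme can be sketched with the classical Bron-Kerbosch recursion. This is only the generic graph step; HaploClique's statistical model of read compatibility is not reproduced here:

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of all maximal cliques.

    `adj` maps each vertex to the set of its neighbours.  In a
    HaploClique-style assembly the vertices would be (super-)reads and
    edges would join statistically compatible read pairs, so each
    maximal clique is a candidate local haplotype.
    """
    cliques = []

    def bk(r, p, x):
        # r: current clique; p: candidates; x: already-processed vertices.
        if not p and not x:
            cliques.append(sorted(r))   # r cannot be extended: maximal
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    bk(set(), set(adj), set())
    return sorted(cliques)

# Two triangles sharing the edge 2-3.
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
```

The `x` set is what distinguishes maximal cliques from mere cliques: a branch only reports `r` when no processed vertex could still extend it. Production implementations add pivoting to prune the recursion.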
Cycle length maximization in PWRs using empirical core models
Okafor, K.C.; Aldemir, T.
1987-01-01
The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem.
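The reduction above ends in a linear program. As a sketch of that final step, the toy below maximizes a hypothetical linear cycle-length correlation over two enrichment variables by brute-force enumeration of constraint intersections; all coefficients are invented for illustration, whereas the paper's correlations come from two-dimensional diffusion-depletion calculations:

```python
from itertools import combinations

def solve_lp(c, constraints):
    """Maximize c.x over {x in R^2 : a.x <= b for each (a, b)} by checking
    every intersection of two constraint lines.  Adequate for a toy
    2-variable LP; a real study would use a proper LP solver."""
    def feasible(x):
        return all(a[0] * x[0] + a[1] * x[1] <= b + 1e-9
                   for a, b in constraints)

    best = None
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                      # parallel constraint lines
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if feasible(x):                   # optimum lies at a vertex
            val = c[0] * x[0] + c[1] * x[1]
            if best is None or val > best[0]:
                best = (val, x)
    return best

# Hypothetical stand-in: x = enrichments of two control zones, cycle
# length ~ 3*x1 + 2*x2 (an assumed linear correlation), subject to
# assumed limits x1 + x2 <= 4, x1 <= 3, x2 <= 3, x1 >= 0, x2 >= 0.
constraints = [((1, 1), 4), ((1, 0), 3), ((0, 1), 3),
               ((-1, 0), 0), ((0, -1), 0)]
best = solve_lp((3, 2), constraints)
```

Because an LP optimum is attained at a vertex of the feasible polygon, enumerating pairwise line intersections suffices in two dimensions; the paper's point is that once the core response is linearized, this whole machinery applies directly.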
The F-Theorem and F-Maximization
Pufu, Silviu S
2016-01-01
This contribution contains a review of the role of the three-sphere free energy F in recent developments related to the F-theorem and F-maximization. The F-theorem states that for any Lorentz-invariant RG trajectory connecting a conformal field theory CFT_UV in the ultraviolet to a conformal field theory CFT_IR, the F-coefficient decreases: F_UV > F_IR. I provide many examples of CFTs where one can compute F, approximately or exactly, and discuss various checks of the F-theorem. F-maximization is the principle that in an N=2 SCFT, viewed as the deep IR limit of an RG trajectory preserving N=2 supersymmetry, the superconformal R-symmetry maximizes F within the set of all R-symmetries preserved by the RG trajectory. I review the derivation of this result and provide examples.
Maximal supports and Schur-positivity among connected skew shapes
McNamara, Peter R W
2011-01-01
The Schur-positivity order on skew shapes is defined by B ≤ A if the difference s_A - s_B is Schur-positive. It is an open problem to determine those connected skew shapes that are maximal with respect to this ordering. A strong sufficient condition for the Schur-positivity of s_A - s_B is that the support of B is contained in that of A, where the support of B is defined to be the set of partitions λ for which s_λ appears in the Schur expansion of s_B. We show that to determine the maximal connected skew shapes in the Schur-positivity order and this support containment order, it suffices to consider a special class of ribbon shapes. We explicitly determine the support for these ribbon shapes, thereby determining the maximal connected skew shapes in the support containment order.
Limiting distribution of maximal crossing and nesting of Poissonized random matchings
Baik, Jinho
2011-01-01
The notion of r-crossing and r-nesting of a complete matching was introduced, and a symmetry property was proved, by Chen, Deng, Du, Stanley and Yan in 2007. We consider random matchings of large size and study the maximal crossing and the maximal nesting. It is known that the marginal distribution of each of them converges to the GOE Tracy-Widom distribution. We show that the maximal crossing and the maximal nesting become independent asymptotically, and evaluate the joint distribution for the Poissonized random matchings explicitly to the first correction term. This leads to an evaluation of the asymptotics of the covariance. Furthermore, we compute the explicit second correction term of the distribution functions of two objects: (a) the length of the longest increasing subsequence of a Poissonized random permutation and (b) the maximal crossing, and hence also the maximal nesting, of a Poissonized random matching.
StaticGreedy: solving the scalability-accuracy dilemma in influence maximization
Cheng, Suqi; Shen, Huawei; Huang, Junming; Zhang, Guoqing; Cheng, Xueqi
2012-01-01
Influence maximization, defined as a problem of finding a set of seed nodes to trigger a maximized spread of influence, is crucial to viral marketing on social networks. For practical viral marketing on large scale social networks, it is required that influence maximization algorithms should have both guaranteed accuracy and high scalability. However, existing algorithms suffer a scalability-accuracy dilemma: conventional greedy algorithms guarantee the accuracy with expensive computation, wh...
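The greedy strategy such algorithms build on can be illustrated on a deterministic special case. The sketch below (Python, with hypothetical names) uses a one-hop cascade with propagation probability 1, so the spread function is monotone submodular and greedy selection carries the classic (1 - 1/e) approximation guarantee; it is not the StaticGreedy algorithm itself.

```python
# Hypothetical sketch: greedy seed selection for influence maximization
# on a deterministic special case (one-hop cascade with propagation
# probability 1). The spread function is monotone submodular, so the
# greedy rule enjoys the classic (1 - 1/e) approximation guarantee.

def spread(graph, seeds):
    """Number of nodes reached by the one-hop deterministic cascade."""
    reached = set(seeds)
    for s in seeds:
        reached.update(graph.get(s, ()))
    return len(reached)

def greedy_seeds(graph, k):
    """Pick k seeds, each maximizing the marginal gain in spread."""
    seeds = []
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: spread(graph, seeds + [v]))
        seeds.append(best)
    return seeds

# A star with hub 0 plus a separate edge 5-6: the hub is picked first.
g = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0], 5: [6], 6: [5]}
print(greedy_seeds(g, 2))  # hub 0 first, then an endpoint of edge 5-6
```

The expensive part in practice is estimating the spread under a stochastic cascade, which is exactly the cost the StaticGreedy abstract addresses.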
Moving multiple sinks through wireless sensor networks for lifetime maximization.
Petrioli, Chiara (Universita di Roma); Carosi, Alessio (Universita di Roma); Basagni, Stefano (Northeastern University); Phillips, Cynthia Ann
2008-01-01
Unattended sensor networks typically watch for some phenomena such as volcanic events, forest fires, pollution, or movements in animal populations. Sensors report to a collection point periodically or when they observe reportable events. When sensors are too far from the collection point to communicate directly, other sensors relay messages for them. If the collection point location is static, sensor nodes that are closer to the collection point relay far more messages than those on the periphery. Assuming all sensor nodes have roughly the same capabilities, those with high relay burden experience battery failure much faster than the rest of the network. However, since their death disconnects the live nodes from the collection point, the whole network is then dead. We consider the problem of moving a set of collectors (sinks) through a wireless sensor network to balance the energy used for relaying messages, maximizing the lifetime of the network. We show how to compute an upper bound on the lifetime for any instance using linear and integer programming. We present a centralized heuristic that produces sink movement schedules that produce network lifetimes within 1.4% of the upper bound for realistic settings. We also present a distributed heuristic that produces lifetimes at most 25.3% below the upper bound. More specifically, we formulate a linear program (LP) that is a relaxation of the scheduling problem. The variables are naturally continuous, but the LP relaxes some constraints. The LP has an exponential number of constraints, but we can satisfy them all by enforcing only a polynomial number using a separation algorithm. This separation algorithm is a p-median facility location problem, which we can solve efficiently in practice for huge instances using integer programming technology. This LP selects a set of good sensor configurations. Given the solution to the LP, we can find a feasible schedule by selecting a subset of these configurations, ordering them
Maximal inequalities for demimartingales and their applications
WANG XueJun; HU ShuHe
2009-01-01
In this paper, we establish some maximal inequalities for demimartingales which generalize and improve the results of Christofides. The maximal inequalities for demimartingales are used as key inequalities to establish other results, including Doob's type maximal inequality for demimartingales, strong laws of large numbers and growth rates for demimartingales and associated random variables. Finally, we give an equivalent condition of uniform integrability for demisubmartingales.
A Data-Based Approach to Social Influence Maximization
Goyal, Amit; Lakshmanan, Laks V S
2011-01-01
Influence maximization is the problem of finding a set of users in a social network, such that by targeting this set, one maximizes the expected spread of influence in the network. Most of the literature on this topic has focused exclusively on the social graph, overlooking historical data, i.e., traces of past action propagations. In this paper, we study influence maximization from a novel data-based perspective. In particular, we introduce a new model, which we call credit distribution, that directly leverages available propagation traces to learn how influence flows in the network and uses this to estimate expected influence spread. Our approach also learns the different levels of influenceability of users, and it is time-aware in the sense that it takes the temporal nature of influence into account. We show that influence maximization under the credit distribution model is NP-hard and that the function that defines expected spread under our model is submodular. Based on these, we develop an approximation ...
Task-oriented maximally entangled states
Agrawal, Pankaj; Pradhan, B, E-mail: agrawal@iopb.res.i, E-mail: bpradhan@iopb.res.i [Institute of Physics, Sachivalaya Marg, Bhubaneswar, Orissa 751 005 (India)
2010-06-11
We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.
Inflation in maximal gauged supergravities
Kodama, Hideo [Theory Center, KEK,Tsukuba 305-0801 (Japan); Department of Particles and Nuclear Physics,The Graduate University for Advanced Studies,Tsukuba 305-0801 (Japan); Nozawa, Masato [Dipartimento di Fisica, Università di Milano, and INFN, Sezione di Milano,Via Celoria 16, 20133 Milano (Italy)
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36’ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639±0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10^{-3} and is close to the value in the Starobinsky model.
Bot, Radu Ioan
2008-01-01
In this paper we introduce the notion of enlargement of a positive set in SSD spaces. To a maximally positive set $A$ we associate a family of enlargements $\\E(A)$ and characterize the smallest and biggest element in this family with respect to the inclusion relation. We also emphasize the existence of a bijection between the subfamily of closed enlargements of $\\E(A)$ and the family of so-called representative functions of $A$. We show that the extremal elements of the latter family are two functions recently introduced and studied by Stephen Simons. In this way we extend to SSD spaces some former results given for monotone and maximally monotone sets in Banach spaces.
Maximal inequalities for Bessel processes
Graversen SE
1998-01-01
It is proved that the uniform law of large numbers (over a random parameter set) for the Bessel process started at 0 is valid for all stopping times. The rate obtained on the right-hand side is shown to be the best possible. A maximal inequality is gained as a consequence, valid for all stopping times. This answers a question raised in [4]. The method of proof relies upon representing the Bessel process as a time changed geometric Brownian motion. The main emphasis of the paper is on the method of proof and on the simplicity of solution.
Reflection quasilattices and the maximal quasilattice
Boyle, Latham; Steinhardt, Paul J.
2016-08-01
We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e., Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we explain that reflection quasilattices only exist in dimensions two, three, and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. Unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. We tabulate the complete set of scale factors for all reflection quasilattices in dimension d > 2, and for all those with quadratic irrational scale factors in d = 2.
Are all maximally entangled states pure?
Cavalcanti, D; Terra-Cunha, M O
2005-01-01
In this Letter we study whether all maximally entangled states are pure through several entanglement monotones. Our conclusions allow us to generalize the idea of monogamy of entanglement. We then propose a polygamy of entanglement, which expresses that if a general multipartite state is maximally entangled it is necessarily factorized from any other system.
Sampling and Representation Complexity of Revenue Maximization
Dughmi, Shaddin; Han, Li; Nisan, Noam
2014-01-01
We consider (approximate) revenue maximization in auctions where the distribution on input valuations is given via "black box" access to samples from the distribution. We observe that the number of samples required -- the sample complexity -- is tightly related to the representation complexity of an approximately revenue-maximizing auction. Our main results are upper bounds and an exponential lower bound on these complexities.
Alternative trailer configurations for maximizing payloads
Jason D. Thompson; Dana Mitchell; John Klepac
2017-01-01
In order for harvesting contractors to stay ahead of increasing costs, it is imperative that they employ all options to maximize productivity and efficiency. Transportation can account for half the cost to deliver wood to a mill. Contractors seek to maximize truck payload to increase productivity. The Forest Operations Research Unit, Southern Research Station, USDA...
Cohomology of Weakly Reducible Maximal Triangular Algebras
董浙; 鲁世杰
2000-01-01
In this paper, we introduce the concept of weakly reducible maximal triangular algebras φ, which form a large class of maximal triangular algebras. Let B be a weakly closed algebra containing φ; we prove that the cohomology spaces Hn(φ, B) (n≥1) are trivial.
Maximal Subsemigroups of Finite Transformation Semigroups K(n, r)
Hao Bo YANG; Xiu Liang YANG
2004-01-01
Let Tn be the full transformation semigroup on the n-element set Xn. For an arbitrary integer r such that 2 ≤ r ≤ n - 1, we completely describe the maximal subsemigroups of the semigroup K(n, r) = {α∈ Tn: |im α| ≤ r}. We also formulate the cardinal number of such subsemigroups which is an answer to Problem 46 of Tetrad in 1969, concerning the number of subsemigroups of Tn.
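As a side note, the cardinality of K(n, r) itself follows from standard combinatorics rather than from the paper's subsemigroup analysis: counting maps by image size k gives |K(n, r)| as a sum of C(n, k) · S(n, k) · k! over k ≤ r, where S(n, k) is a Stirling number of the second kind. A short sketch (function names hypothetical):

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind S(n, k), via the standard
    recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def size_K(n, r):
    """|K(n, r)|: transformations of an n-element set with image size
    at most r, counted by image size k as C(n, k) * S(n, k) * k!."""
    return sum(comb(n, k) * stirling2(n, k) * factorial(k)
               for k in range(1, r + 1))

print(size_K(3, 2))  # 21 = all 27 maps minus the 3! = 6 bijections
print(size_K(3, 3))  # 27 = 3^3, every map on a 3-element set
```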
Origin of Constrained Maximal CP Violation in Flavor Symmetry
He, Hong-Jian; Xu, Xun-Jie
2015-01-01
Current data from neutrino oscillation experiments are in good agreement with $\\delta=-\\pi/2$ and $\\theta_{23} = \\pi/4$. We define the notion of "constrained maximal CP violation" for these features and study their origin in flavor symmetry models. We give various parametrization-independent definitions of constrained maximal CP violation and present a theorem on how it can be generated. This theorem takes advantage of residual symmetries in the neutrino and charged lepton mass matrices, and states that, up to a few exceptions, $\\delta=\\pm\\pi/2$ and $\\theta_{23} = \\pi/4$ are generated when those symmetries are real. The often considered $\\mu$-$\\tau$ reflection symmetry, as well as specific discrete subgroups of $O(3)$, are special cases of our theorem.
Dynamically Disordered Quantum Walk as a Maximal Entanglement Generator
Vieira, Rafael; Amorim, Edgard P. M.; Rigolin, Gustavo
2013-11-01
We show that the entanglement between the internal (spin) and external (position) degrees of freedom of a qubit in a random (dynamically disordered) one-dimensional discrete time quantum random walk (QRW) achieves its maximal possible value asymptotically in the number of steps, outperforming the entanglement attained by using ordered QRW. The disorder is modeled by introducing an extra random aspect to QRW, a classical coin that randomly dictates which quantum coin drives the system’s time evolution. We also show that maximal entanglement is achieved independently of the initial state of the walker, study the number of steps the system must move to be within a small fixed neighborhood of its asymptotic limit, and propose two experiments where these ideas can be tested.
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-07
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.
Maximal strength, muscular endurance and inflammatory biomarkers in young adult men.
Vaara, J P; Vasankari, T; Fogelholm, M; Häkkinen, K; Santtila, M; Kyröläinen, H
2014-12-01
The aim was to study associations of maximal strength and muscular endurance with inflammatory biomarkers independent of cardiorespiratory fitness in those with and without abdominal obesity. 686 young healthy men participated (25±5 years). Maximal strength was measured via isometric testing using dynamometers to determine a maximal strength index. The muscular endurance index consisted of push-ups, sit-ups and repeated squats. An indirect cycle ergometer test until exhaustion was used to estimate maximal aerobic capacity (VO2max). Participants were stratified into those with abdominal obesity (>102 cm) and those without. Muscular fitness indices were inversely associated with the inflammatory biomarkers in both groups (β=-0.08, -0.14 and β=-0.11, -0.26, respectively; p<0.05). This cross-sectional study demonstrated that muscular fitness is inversely associated with C-reactive protein and IL-6 concentrations in young adult men independent of cardiorespiratory fitness.
Remote State Preparation via a Non-Maximally Entangled Channel
郑亦庄; 顾永建; 郭光灿
2002-01-01
We investigate remote state preparation (RSP) via a non-maximally entangled channel for three cases: a general qubit; a special ensemble of qubits (qubit states on the equator of the Bloch sphere); and an asymptotic limit of N copies of a general state. The results show that the classical communication cost of RSP for the two latter cases can be less than that of teleportation, but for the first case, in a restricted setting, the classical communication cost is equal to that of teleportation. Whether or not this is the case for a more general setting is still an open question.
Statistical mechanics of influence maximization with thermal noise
Lynn, Christopher W.; Lee, Daniel D.
2017-03-01
The problem of optimally distributing a budget of influence among individuals in a social network, known as influence maximization, has typically been studied in the context of contagion models and deterministic processes, which fail to capture stochastic interactions inherent in real-world settings. Here, we show that by introducing thermal noise into influence models, the dynamics exactly resemble spins in a heterogeneous Ising system. In this way, influence maximization in the presence of thermal noise has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Using this statistical mechanical formulation, we demonstrate analytically that for small external-field budgets, the optimal influence solutions exhibit a highly non-trivial temperature dependence, focusing on high-degree hub nodes at high temperatures and on easily influenced peripheral nodes at low temperatures. For the general problem, we present a projected gradient ascent algorithm that uses the magnetic susceptibility to calculate locally optimal external-field distributions. We apply our algorithm to synthetic and real-world networks, demonstrating that our analytic results generalize qualitatively. Our work establishes a fruitful connection with statistical mechanics and demonstrates that influence maximization depends crucially on the temperature of the system, a fact that has not been appreciated by existing research.
Maximizing your return on people.
Bassi, Laurie; McMurrer, Daniel
2007-03-01
Though most traditional HR performance metrics don't predict organizational performance, alternatives simply have not existed--until now. During the past ten years, researchers Laurie Bassi and Daniel McMurrer have worked to develop a system that allows executives to assess human capital management (HCM) and to use those metrics both to predict organizational performance and to guide organizations' investments in people. The new framework is based on a core set of HCM drivers that fall into five major categories: leadership practices, employee engagement, knowledge accessibility, workforce optimization, and organizational learning capacity. By employing rigorously designed surveys to score a company on the range of HCM practices across the five categories, it's possible to benchmark organizational HCM capabilities, identify HCM strengths and weaknesses, and link improvements or back-sliding in specific HCM practices with improvements or shortcomings in organizational performance. The process requires determining a "maturity" score for each practice, based on a scale of 1 (low) to 5 (high). Over time, evolving maturity scores from multiple surveys can reveal progress in each of the HCM practices and help a company decide where to focus improvement efforts that will have a direct impact on performance. The authors draw from their work with American Standard, South Carolina's Beaufort County School District, and a bevy of financial firms to show how improving HCM scores led to increased sales, safety, academic test scores, and stock returns. Bassi and McMurrer urge HR departments to move beyond the usual metrics and begin using HCM measurement tools to gauge how well people are managed and developed throughout the organization. In this new role, according to the authors, HR can take on strategic responsibility and ensure that superior human capital management becomes central to the organization's culture.
The effects of strenuous exercises on resting heart rate, blood pressure, and maximal oxygen uptake.
Oh, Deuk-Ja; Hong, Hyeon-Ok; Lee, Bo-Ae
2016-02-01
The purpose of this study is to investigate the effects of strenuous exercises on resting heart rate, blood pressure, and maximal oxygen uptake. To achieve the purpose of the study, a total of 30 subjects were selected, including 15 people who performed continued regular exercises and 15 people as the control group. With regard to data processing, the IBM SPSS Statistics ver. 21.0 was used to calculate the mean and standard deviation. The difference of mean change between groups was verified through an independent t-test. As a result, there were significant differences in resting heart rate, maximal heart rate, maximal systolic blood pressure, and maximal oxygen uptake. However, the maximal systolic blood pressure was found to be an exercise-induced high blood pressure. Thus, it is thought that a risk diagnosis for it through a regular exercise stress test is necessary.
Are all maximally entangled states pure?
Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.
2005-10-01
We study whether all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized from any other system.
An ethical justification of profit maximization
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain form of profit (and...
Robust utility maximization in a discontinuous filtration
Jeanblanc, Monique; Ngoupeyou, Armand
2012-01-01
We study a problem of utility maximization under model uncertainty with information including jumps. We prove first that the value process of the robust stochastic control problem is described by the solution of a quadratic-exponential backward stochastic differential equation with jumps. Then, we establish a dynamic maximum principle for the optimal control of the maximization problem. The characterization of the optimal model and the optimal control (consumption-investment) is given via a forward-backward system which generalizes the result of Duffie and Skiadas (1994) and El Karoui, Peng and Quenez (2001) in the case of maximization of recursive utilities including model with jumps.
The maximal energy of classes of integral circulant graphs
Sander, Jürgen W
2012-01-01
The energy of a graph is the sum of the moduli of the eigenvalues of its adjacency matrix. We study the energy of integral circulant graphs, also called gcd graphs, which can be characterized by their vertex count $n$ and a set $\\cal D$ of divisors of $n$ in such a way that they have vertex set $\\mathbb{Z}_n$ and edge set $\\{\\{a,b\\}: a,b\\in\\mathbb{Z}_n, \\gcd(a-b,n)\\in {\\cal D}\\}$. For a fixed prime power $n=p^s$ and a fixed divisor set size $|{\\cal D}| =r$, we analyze the maximal energy among all matching integral circulant graphs. Let $p^{a_1} < p^{a_2} < ... < p^{a_r}$ be the elements of ${\\cal D}$. It turns out that the differences $d_i=a_{i+1}-a_{i}$ between the exponents of an energy maximal divisor set must satisfy certain balance conditions: (i) either all $d_i$ equal $q:=\\frac{s-1}{r-1}$, or at most the two differences $[q]$ and $[q+1]$ may occur; (ii) there are rules governing the sequence $d_1,...,d_{r-1}$ of consecutive differences. For particular ...
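The energy of such a graph can be computed directly from the circulant structure: the eigenvalues of a circulant matrix are the discrete Fourier transform of its first row. A brief sketch (function name hypothetical):

```python
import cmath
from math import gcd

def icg_energy(n, divisors):
    """Energy of the integral circulant (gcd) graph ICG(n, D): the sum
    of |eigenvalue| over all eigenvalues. A circulant matrix has
    eigenvalues equal to the discrete Fourier transform of its first
    row; for gcd graphs these are known to be integers."""
    row = [1 if gcd(k, n) in divisors else 0 for k in range(n)]
    energy = 0.0
    for m in range(n):
        lam = sum(row[k] * cmath.exp(2j * cmath.pi * m * k / n)
                  for k in range(n))
        energy += abs(lam.real)  # imaginary parts vanish up to rounding
    return energy

# With D = {1} and n prime, ICG(n, D) is the complete graph K_n,
# whose energy is 2(n - 1).
print(round(icg_energy(5, {1})))  # 8
```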
Maximizing throughput by evaluating critical utilization paths
Weeda, P.J.
1991-01-01
Recently the relationship between batch structure, bottleneck machine and maximum throughput has been explored for serial, convergent and divergent process configurations consisting of two machines and three processes. In three of the seven possible configurations a multiple batch structure maximizes throughput.
Relationship between maximal exercise parameters and individual ...
Relationship between maximal exercise parameters and individual time trial ... It is widely accepted that the ventilatory threshold (VT) is an important ... This study investigated whether the physiological responses during a 20km time trial (TT) ...
Simple technique for maximal thoracic muscle harvest.
Marshall, M Blair; Kaiser, Larry R; Kucharczuk, John C
2004-04-01
We present a modification of the technique for standard muscle flap harvest: the placement of cutaneous traction sutures. This technique allows for maximal dissection of the thoracic muscles even through minimal incisions. Through improved exposure and traction, complete dissection of the muscle bed can be performed and the amount of tissue obtained maximized. Because more muscle bulk is obtained with this technique, the need for a second muscle may be prevented.
MAXIMAL POINTS OF A REGULAR TRUTH FUNCTION
Every canonical linearly separable truth function is a regular function, but not every regular truth function is linearly separable. The most promising method of determining which of the regular truth functions are linearly separable requires finding their maximal and minimal points. In this report a quick, systematic method of finding the maximal points of any regular truth function in terms of its arithmetic invariants is developed. (Author)
Maximal Subgroups of Skew Linear Groups
M. Mahdavi-Hezavehi
2002-01-01
Let D be an infinite division algebra of finite dimension over its centre Z(D) = F, and n a positive integer. The structure of maximal subgroups of skew linear groups is investigated. In particular, assume N is a normal subgroup of GLn(D) and M is a maximal subgroup of N containing Z(N). It is shown that if M/Z(N) is finite, then N is central.
Additive Approximation Algorithms for Modularity Maximization
Kawase, Yasushi; Matsui, Tomomi; Miyauchi, Atsushi
2016-01-01
The modularity is a quality function in community detection, which was introduced by Newman and Girvan (2004). Community detection in graphs is now often conducted through modularity maximization: given an undirected graph $G=(V,E)$, we are asked to find a partition $\\mathcal{C}$ of $V$ that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity max...
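For readers unfamiliar with the quality function itself, the Newman-Girvan modularity of a partition can be computed in a few lines. The sketch below assumes an undirected simple graph given as an edge list (names hypothetical):

```python
def modularity(edges, communities):
    """Newman-Girvan modularity Q = sum_c (e_c/m - (d_c/2m)^2), where
    e_c counts intra-community edges and d_c is the total degree of
    community c, for an undirected simple graph with m edges."""
    m = len(edges)
    comm_of = {v: c for c, nodes in enumerate(communities) for v in nodes}
    e = [0] * len(communities)  # intra-community edge counts
    d = [0] * len(communities)  # total degree per community
    for u, v in edges:
        d[comm_of[u]] += 1
        d[comm_of[v]] += 1
        if comm_of[u] == comm_of[v]:
            e[comm_of[u]] += 1
    return sum(e[c] / m - (d[c] / (2 * m)) ** 2
               for c in range(len(communities)))

# Two triangles joined by a bridge: the natural split scores Q = 5/14,
# while putting everything in one community scores exactly 0.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # about 0.357
print(modularity(edges, [{0, 1, 2, 3, 4, 5}]))    # 0.0
```

Modularity maximization asks for the partition making this quantity as large as possible, which is what the approximation guarantees in the abstract concern.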
Maximal Frequent Itemset Generation Using Segmentation Apporach
M. Rajalakshmi
2011-09-01
Finding frequent itemsets in a data source is a fundamental operation behind Association Rule Mining. Generally, many algorithms use either the bottom-up or top-down approach for finding these frequent itemsets. When the length of the frequent itemsets to be found is large, the traditional algorithms find all the frequent itemsets from length 1 to length n, which is a difficult process. This problem can be solved by mining only the Maximal Frequent Itemsets (MFS). Maximal Frequent Itemsets are frequent itemsets which have no proper frequent superset. Thus, the generation of only maximal frequent itemsets reduces the number of itemsets and also the time needed for the generation of all frequent itemsets, as each maximal itemset of length m implies the presence of 2^m - 2 frequent itemsets. Furthermore, mining only maximal frequent itemsets is sufficient in many data mining applications like minimal key discovery and theory extraction. In this paper, we suggest a novel method for finding the maximal frequent itemset from huge data sources using the concept of segmentation of the data source and prioritization of segments. Empirical evaluation shows that this method outperforms various other known methods.
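A brute-force sketch (illustrative only; the paper's segmentation approach is not reproduced here) shows the relationship between frequent and maximal frequent itemsets:

```python
from itertools import combinations

def frequent_itemsets(transactions, minsup):
    """All itemsets appearing in at least minsup transactions
    (naive enumeration, fine for tiny examples)."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for size in range(1, len(items) + 1):
        for cand in combinations(items, size):
            support = sum(set(cand) <= set(t) for t in transactions)
            if support >= minsup:
                frequent.append(frozenset(cand))
    return frequent

def maximal_frequent(transactions, minsup):
    """Frequent itemsets with no frequent proper superset (the MFS)."""
    freq = frequent_itemsets(transactions, minsup)
    return [s for s in freq if not any(s < t for t in freq)]

db = [{'a', 'b', 'c'}, {'a', 'b', 'c'}, {'a', 'b'},
      {'c', 'd'}, {'c', 'd'}]
mfs = maximal_frequent(db, 2)
print(mfs)  # the maximal frequent itemsets {'a','b','c'} and {'c','d'}
```

Here the maximal itemset {a, b, c} of length 3 implies 2^3 - 2 = 6 further frequent itemsets: its proper nonempty subsets.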
Natural selection and the maximization of fitness.
Birch, Jonathan
2016-08-01
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits.
Holz, Elisa Mira
2016-01-01
Brain-computer interfaces (BCIs) are devices that translate signals from the brain into control commands for applications. Within the last twenty years, BCI applications have been developed for communication, environmental control, entertainment, and substitution of motor functions. Since BCIs provide muscle independent communication and control of the environment by circumventing motor pathways, they are considered as assistive technologies for persons with neurological and neurodegenerative...
Outer-2-independent domination in graphs
Marcin Krzywkowski; Doost Ali Mojdeh; Maryem Raoofi
2016-02-01
We initiate the study of outer-2-independent domination in graphs. An outer-2-independent dominating set of a graph G is a set D of vertices of G such that every vertex of V(G)\D has a neighbor in D and the maximum vertex degree of the subgraph induced by V(G)\D is at most one. The outer-2-independent domination number of a graph G is the minimum cardinality of an outer-2-independent dominating set of G. We show that if a graph has minimum degree at least two, then its outer-2-independent domination number equals the number of vertices minus the 2-independence number. We then investigate outer-2-independent domination in graphs with minimum degree one. We also prove the Vizing-type conjecture for outer-2-independent domination and disprove the Vizing-type conjecture for outer-connected domination.
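The definition can be checked mechanically. A minimal sketch with the graph given as an adjacency dict; the 4-cycle example is ours, not from the paper, and illustrates the minimum-degree-two theorem (n = 4, 2-independence number 2, so the outer-2-independent domination number is 2):

```python
def is_o2i_dominating(adj, D):
    """Check that D is an outer-2-independent dominating set of the graph
    given as {vertex: set_of_neighbours}: every vertex outside D has a
    neighbour in D, and the subgraph induced by V \\ D has max degree <= 1."""
    D = set(D)
    outside = set(adj) - D
    dominated = all(adj[v] & D for v in outside)
    two_independent = all(len(adj[v] & outside) <= 1 for v in outside)
    return dominated and two_independent

# Hypothetical 4-cycle a-b-c-d: minimum degree 2, so the theorem in the
# abstract predicts an outer-2-independent domination number of 4 - 2 = 2
C4 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
print(is_o2i_dominating(C4, {"a", "c"}), is_o2i_dominating(C4, {"a"}))  # True False
```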
Jois, Manjunath Holaykoppa Nanjunda
The conventional Influence Maximization problem is the problem of finding such a team (a small subset) of seed nodes in a social network that would maximize the spread of influence over the whole network. This paper considers a lottery system aimed at maximizing the awareness spread to promote energy conservation behavior as a stochastic Influence Maximization problem with the constraints ensuring lottery fairness. The resulting Multi-Team Influence Maximization problem involves assigning the probabilities to multiple teams of seeds (interpreted as lottery winners) to maximize the expected awareness spread. Such a variation of the Influence Maximization problem is modeled as a Linear Program; however, enumerating all the possible teams is a hard task considering that the feasible team count grows exponentially with the network size. In order to address this challenge, we develop a column generation based approach to solve the problem with a limited number of candidate teams, where new candidates are generated and added to the problem iteratively. We adopt a piecewise linear function to model the impact of including a new team so as to pick only such teams which can improve the existing solution. We demonstrate that with this approach we can solve such influence maximization problems to optimality, and perform computational study with real-world social network data sets to showcase the efficiency of the approach in finding lottery designs for optimal awareness spread. Lastly, we explore other possible scenarios where this model can be utilized to optimally solve the otherwise hard to solve influence maximization problems.
Tseng, Kuo-Wei; Tseng, Wei-Chin; Lin, Ming-Ju; Chen, Hsin-Lian; Nosaka, Kazunori; Chen, Trevor C
2016-01-01
This study investigated whether maximal voluntary isometric contractions (MVIC) performed before maximal eccentric contractions (MaxEC) would attenuate muscle damage of the knee extensors. Untrained men were assigned to an experimental group that performed 6 sets of 10 MVIC at 90° knee flexion 2 weeks before 6 sets of 10 MaxEC, or to a control group that performed MaxEC only (n = 13/group). Changes in muscle damage markers were assessed from before to 5 days after each exercise. Small but significant changes in maximal voluntary concentric contraction torque, range of motion (ROM) and plasma creatine kinase (CK) activity were evident from immediately to 2 days post-MVIC (p < 0.05), but other variables (e.g. thigh girth, myoglobin concentration, B-mode echo intensity) did not change significantly. Changes in all variables after MaxEC were smaller (p < 0.05), by 45% (soreness) to 67% (CK), for the experimental than for the control group. These results suggest that MVIC conferred a potent protective effect against MaxEC-induced muscle damage.
Weak incidence algebra and maximal ring of quotients
Surjeet Singh
2004-01-01
Let X, X′ be two locally finite, preordered sets and let R be any indecomposable commutative ring. The incidence algebra I(X,R), in a sense, represents X, because of the well-known result that if the rings I(X,R) and I(X′,R) are isomorphic, then X and X′ are isomorphic. In this paper, we consider a preordered set X that need not be locally finite but has the property that each of its equivalence classes of equivalent elements is finite. Define I*(X,R) to be the set of all those functions f:X×X→R such that f(x,y)=0 whenever x⩽̸y and the set Sf of ordered pairs (x,y) with x
Independent Innovation Is a Must
无
2006-01-01
The importance of independent innovation has been recognized by the Chinese Government and the public. The 11th Five-Year Plan (2006-10) sets the enhancement of China's capability for independent innovation and the creation of an innovation-driven growth mode as a national strategy that is required to be integrated into every aspect of society. "Independent innovation" was also a key phrase in Premier Wen Jiabao's report on government work at the recent session of the National People's Congress, the top ...
The Generalized Scheme-Independent Crewther Relation in QCD
Shen, Jian-Ming; Ma, Yang; Brodsky, Stanley J
2016-01-01
The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbative QCD process. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived for conformal theory, provides a remarkable connection between two observables when the $\\beta$ function vanishes. The "Generalized Crewther Relation" relates these two observables for physical QCD with nonzero $\\beta$ function; specifically, it connects the non-singlet Adler function ($D^{\\rm ns}$) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering ($C_{\\rm Bjp}$) at leading twist. A scheme-dependent $\\Delta_{\\rm CSB}$-term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence in both the choice of the renorma...
Welfare-maximizing and revenue-maximizing tariffs with a few domestic firms
Bruno Larue; Jean-Philippe Gervais
2002-01-01
In this paper we compare the orthodox optimal tariff formula with the appropriate welfare-maximizing tariff when there are a few producing or importing firms. The welfare-maximizing tariff can be very low, or even negative in some cases, while in others it can even exceed the maximum-revenue tariff. The relationship between the welfare-maximizing tariff and the number of firms need not be monotonically increasing, because the tariff is not strictly used to internalize the terms-of-trade externality...
Polyploidy Induction of Pteroceltis tatarinowii Maxim
Lin ZHANG; Feng WANG; Zhongkui SUN; Cuicui ZHU; Rongwei CHEN
2015-01-01
[Objective] This study was conducted to obtain tetraploid Pteroceltis tatarinowii Maxim. with excellent ornamental traits. [Method] The stem apex growing points of Pteroceltis tatarinowii Maxim. were treated with different concentrations of colchicine solution for different durations to determine a proper method and obtain polyploids. [Result] The most effective induction was obtained by treatment with 0.6%-0.8% colchicine for 72 h, with a 34.2% mutation rate. Flow cytometry and chromosome observation of the stem apex growing points proved that tetraploid plants were successfully obtained, with chromosome number 2n=4x=36. [Conclusion] The result not only fills the blank in polyploid breeding of P. tatarinowii, but also provides an effective way to broaden the methods for cultivating fast-growing, high-quality, disease-resistant new varieties of Pteroceltis.
The maximal process of nonlinear shot noise
Eliazar, Iddo; Klafter, Joseph
2009-05-01
In the nonlinear shot noise system-model shots’ statistics are governed by general Poisson processes, and shots’ decay-dynamics are governed by general nonlinear differential equations. In this research we consider a nonlinear shot noise system and explore the process tracking, along time, the system’s maximal shot magnitude. This ‘maximal process’ is a stationary Markov process following a decay-surge evolution; it is highly robust, and it is capable of displaying both a wide spectrum of statistical behaviors and a rich variety of random decay-surge sample-path trajectories. A comprehensive analysis of the maximal process is conducted, including its Markovian structure, its decay-surge structure, and its correlation structure. All results are obtained analytically and in closed-form.
Absence of parasympathetic reactivation after maximal exercise.
de Oliveira, Tiago Peçanha; de Alvarenga Mattos, Raphael; da Silva, Rhenan Bartels Ferreira; Rezende, Rafael Andrade; de Lima, Jorge Roberto Perrout
2013-03-01
The ability of the human organism to recover its autonomic balance soon after physical exercise cessation has an important impact on the individual's health status. Although the dynamics of heart rate recovery after maximal exercise has been studied, little is known about heart rate variability after this type of exercise. The aim of this study is to analyse the dynamics of heart rate and heart rate variability recovery after maximal exercise in healthy young men. Fifteen healthy male subjects (21·7 ± 3·4 years; 24·0 ± 2·1 kg m(-2) ) participated in the study. The experimental protocol consisted of an incremental maximal exercise test on a cycle ergometer, until maximal voluntary exhaustion. After the test, recovery R-R intervals were recorded for 5 min. From the absolute differences between peak heart rate values and the heart rate values at 1 and 5 min of the recovery, the heart rate recovery was calculated. Postexercise heart rate variability was analysed from calculations of the SDNN and RMSSD indexes, in 30-s windows (SDNN(30s) and RMSSD(30s) ) throughout recovery. One and 5 min after maximal exercise cessation, the heart rate recovered 34·7 (±6·6) and 75·5 (±6·1) bpm, respectively. With regard to HRV recovery, while the SDNN(30s) index had a slight increase, RMSSD(30s) index remained totally suppressed throughout the recovery, suggesting an absence of vagal modulation reactivation and, possibly, a discrete sympathetic withdrawal. Therefore, it is possible that the main mechanism associated with the fall of HR after maximal exercise is sympathetic withdrawal or a vagal tone restoration without vagal modulation recovery. © 2012 The Authors Clinical Physiology and Functional Imaging © 2012 Scandinavian Society of Clinical Physiology and Nuclear Medicine.
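The two windowed indexes used in the study, SDNN and RMSSD, have standard definitions. A minimal sketch over a hypothetical R-R interval series in milliseconds (SDNN is taken here as the population standard deviation; the sample values are illustrative, not the study's data):

```python
import statistics

def sdnn(rr_ms):
    """SDNN: standard deviation of the R-R (NN) intervals in the window."""
    return statistics.pstdev(rr_ms)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive differences between
    adjacent R-R intervals; reflects short-term (vagal) variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Hypothetical 30-s window of recovery R-R intervals, in milliseconds
rr = [812, 790, 805, 798, 820, 801]
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```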
Maximizing band gaps in plate structures
Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard
2006-01-01
Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated...
Maximizing oil yields may not optimize economics
1987-03-01
The Los Alamos National Laboratory has used the ASPEN computer code to calculate the economics of different hydroretorting conditions. When the oil yield was maximized and an oil shale plant was designed around this process, the costs turned out to be much higher than expected. However, calculations based on runs at less than maximum yield showed lower cost estimates. It is recommended that future efforts be concentrated on minimizing production costs rather than maximizing yields. An oil shale plant has been designed around minimum production cost, but this design has not yet been tested experimentally.
Maximal Inequalities for Dependent Random Variables
Hoffmann-Jorgensen, Jorgen
2016-01-01
Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a...
Singularity Structure of Maximally Supersymmetric Scattering Amplitudes
Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy
2014-01-01
We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity, properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).
Budget Allocation for Maximizing Viral Advertising in Social Networks
Bo-Lei Zhang; Zhu-Zhong Qian; Wen-Zhong Li; Bin Tang; Xiaoming Fu
2016-01-01
Viral advertising in social networks has arisen as one of the most promising ways to increase brand awareness and product sales. By distributing a limited budget, we can incentivize a set of users as initial adopters so that the advertising starts from the initial adopters and spreads via social links to become viral. Despite extensive research into how to target the most influential users, a key issue is often neglected: how to incentivize the initial adopters. In the influence maximization problem, the assumption is that each user has a fixed cost for being an initial adopter, while in practice, user decisions about accepting the budget to become initial adopters are often probabilistic rather than deterministic. In this paper, we study optimal budget allocation in social networks to maximize the spread of viral advertising. In particular, a concave probability model is introduced to characterize each user's utility for being an initial adopter. Under this model, we show that it is NP-hard to find an optimal budget allocation for maximizing the spread of viral advertising. We then present a novel discrete greedy algorithm with near-optimal performance, and further propose scaling-up techniques to improve the time-efficiency of our algorithm. Extensive experiments on real-world social graphs validate the effectiveness of our algorithm in practice. The results show that our algorithm can outperform other intuitive heuristics significantly in almost all cases.
Improved Algorithms OF CELF and CELF++ for Influence Maximization
Jiaguo Lv
2014-06-01
Motivated by wide applications in fields such as viral marketing and sales promotion, influence maximization has been one of the most important and extensively studied problems in social networks. However, the classical KK-Greedy algorithm for influence maximization is inefficient. Two major sources of the algorithm's inefficiency are analyzed in this paper. Following the analysis of the CELF and CELF++ algorithms, we observe that once a new seed u is selected, no node in the set influenced by u can bring any further marginal gain. With this optimization strategy, many redundant nodes are removed from the candidate set. Based on this strategy, two improved algorithms, Lv_CELF and Lv_CELF++, are proposed in this study. To evaluate them, the two algorithms and their benchmarks CELF and CELF++ were run on several real-world datasets, with influence degree and running time used to measure performance and efficiency, respectively. Experimental results showed that, compared with the benchmark algorithms CELF and CELF++, the new algorithms Lv_CELF and Lv_CELF++ achieved matching effectiveness with higher efficiency. Solutions using the proposed optimization strategy can be useful for decision-making problems in scenarios related to influence maximization.
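The pruning idea behind CELF-style algorithms rests on submodularity: a node's marginal gain can only shrink as the seed set grows, so stale gains kept in a max-heap rarely need recomputing. A minimal lazy-forward greedy sketch, with a toy coverage function standing in for the Monte Carlo spread estimate (this illustrates the generic CELF idea, not the paper's Lv_CELF implementation):

```python
import heapq

def celf(nodes, spread, k):
    """Lazy-forward greedy seed selection in the style of CELF.
    `spread(S)` must be a monotone submodular estimate of the expected
    influence of seed set S; nodes must be mutually comparable."""
    seeds, base = [], 0.0
    # max-heap via negated gains; stamp records when a gain was last computed
    heap = [(-spread([v]), v, 0) for v in nodes]
    heapq.heapify(heap)
    for rnd in range(1, k + 1):
        while True:
            neg_gain, v, stamp = heapq.heappop(heap)
            if stamp == rnd:            # gain is fresh for the current seed set
                seeds.append(v)
                base += -neg_gain
                break
            # stale entry: recompute marginal gain w.r.t. the current seeds
            fresh = spread(seeds + [v]) - base
            heapq.heappush(heap, (-fresh, v, rnd))
    return seeds

# Toy monotone submodular spread: coverage of hypothetical follower sets
followers = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
cover = lambda S: len(set().union(*(followers[v] for v in S)))
print(celf(list(followers), cover, 2))
```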
Maximizing the Spread of Cascades Using Network Design
Sheldon, Daniel; Elmachtoub, Adam N; Finseth, Ryan; Sabharwal, Ashish; Conrad, Jon; Gomes, Carla P; Shmoys, David; Allen, William; Amundsen, Ole; Vaughan, William
2012-01-01
We introduce a new optimization framework to maximize the expected spread of cascades in networks. Our model allows a rich set of actions that directly manipulate cascade dynamics by adding nodes or edges to the network. Our motivating application is one in spatial conservation planning, where a cascade models the dispersal of wild animals through a fragmented landscape. We propose a mixed integer programming (MIP) formulation that combines elements from network design and stochastic optimization. Our approach results in solutions with stochastic optimality guarantees and points to conservation strategies that are fundamentally different from naive approaches.
Maximizing Team Performance: The Critical Role of the Nurse Leader.
Manges, Kirstin; Scott-Cawiezell, Jill; Ward, Marcia M
2017-01-01
Facilitating team development is challenging, yet critical for ongoing improvement across healthcare settings. The purpose of this exemplary case study is to examine the role of nurse leaders in facilitating the development of a high-performing Change Team in implementing a patient safety initiative (TeamSTEPPs) using the Tuckman Model of Group Development as a guiding framework. The case study is the synthesis of 2.5 years of critical access hospital key informant interviews (n = 50). Critical juncture points related to team development and key nurse leader actions are analyzed, suggesting that nurse leaders are essential to maximize clinical teams' performance. © 2016 Wiley Periodicals, Inc.
Isodiametric sets in the Heisenberg group
Leonardi, Gian Paolo; Rigot, Severine; Vittone, Davide
2010-01-01
In the sub-Riemannian Heisenberg group equipped with its Carnot-Caratheodory metric and with a Haar measure, we consider isodiametric sets, i.e. sets maximizing the measure among all sets with a given diameter. In particular, given an isodiametric set, and up to negligible sets, we prove that its boundary is given by the graphs of two locally Lipschitz functions. Moreover, in the restricted class of rotationally invariant sets, we give a quite complete characterization of any compact (rotatio...
Iterative convergence theorems for maximal monotone operators and relatively nonexpansive mappings
WEI Li; SU Yong-fu; ZHOU Hai-yun
2008-01-01
In this paper, some iterative schemes for approximating the common element of the set of zero points of maximal monotone operators and the set of fixed points of relatively nonexpansive mappings in a real uniformly smooth and uniformly convex Banach space are proposed. Some strong convergence theorems are obtained, to extend the previous work.
Independent histogram pursuit for segmentation of skin lesions
Gomez, D.D.; Butakoff, C.; Ersbøll, Bjarne Kjær;
2008-01-01
In this paper, an unsupervised algorithm, called the Independent Histogram Pursuit (HIP), for segmenting dermatological lesions is proposed. The algorithm estimates a set of linear combinations of image bands that enhance different structures embedded in the image. In particular, the first estimated combination enhances the contrast of the lesion to facilitate its segmentation. Given an N-band image, this first combination corresponds to a line in N dimensions, such that the separation between the two main modes of the histogram obtained by projecting the pixels onto this line is maximized. The algorithm is able to deal with different types of dermatological lesions. The boundary detection precision using k-means segmentation was close to 97%. The proposed algorithm can be easily combined with the majority of classification algorithms.
Cobordism Independence of Grassmann Manifolds
Ashish Kumar Das
2004-02-01
This note proves that, for $F=\mathbb{R},\mathbb{C}$ or $\mathbb{H}$, the bordism classes of all non-bounding Grassmannian manifolds $G_k(F^{n+k})$, with $k<n$ and having real dimension $d$, constitute a linearly independent set in the unoriented bordism group $\mathfrak{N}_d$ regarded as a $\mathbb{Z}_2$-vector space.
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching;
2015-01-01
We conjecture that the balanced complete bipartite graph $K_{\lfloor n/2\rfloor,\lceil n/2\rceil}$ contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds...
Gradient dynamics and entropy production maximization
Janečka, Adam
2016-01-01
Gradient dynamics describes irreversible evolution by means of a dissipation potential, which leads to several advantageous features such as Maxwell-Onsager relations, a distinction between thermodynamic forces and fluxes, and a geometrical interpretation of the dynamics. Entropy production maximization is a powerful tool for predicting constitutive relations in engineering. In this paper, both approaches are compared and their shortcomings and advantages are discussed.
Robust Utility Maximization Under Convex Portfolio Constraints
Matoussi, Anis, E-mail: anis.matoussi@univ-lemans.fr [Université du Maine, Risk and Insurance institut of Le Mans Laboratoire Manceau de Mathématiques (France); Mezghani, Hanen, E-mail: hanen.mezghani@lamsin.rnu.tn; Mnif, Mohamed, E-mail: mohamed.mnif@enit.rnu.tn [University of Tunis El Manar, Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT (Tunisia)
2015-04-15
We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We state the existence and the uniqueness of the consumption-investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Maximizing the Motivated Mind for Emergent Giftedness.
Rea, Dan
2001-01-01
This article explains how the theory of the motivated mind conceptualizes the productive interaction of intelligence, creativity, and achievement motivation and how this theory can help educators to maximize students' emergent potential for giftedness. It discusses the integration of cold-order thinking and hot-chaotic thinking into fluid-adaptive…
MAXIMAL ELEMENTS AND EQUILIBRIUM OF ABSTRACT ECONOMY
刘心歌; 蔡海涛
2001-01-01
An existence theorem of maximal elements for a new type of preference correspondences which are Qθ-majorized is given. Then some existence theorems of equilibrium for abstract economy and qualitative game in which the constraint or preference correspondences are Qθ-majorized are obtained in locally convex topological vector spaces.
DNA solution of the maximal clique problem.
Ouyang, Q; Kaplan, P D; Liu, S; Libchaber, A
1997-10-17
The maximal clique problem has been solved by means of molecular biology techniques. A pool of DNA molecules corresponding to the total ensemble of six-vertex cliques was built, followed by a series of selection processes. The algorithm is highly parallel and has satisfactory fidelity. This work represents further evidence for the ability of DNA computing to solve NP-complete search problems.
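The six-vertex scale of the instance makes an in-silico check trivial; the DNA pool plays the role of an exhaustive enumeration over all vertex subsets. A brute-force sketch over a hypothetical 6-vertex graph (the graph is ours, not the paper's instance):

```python
from itertools import combinations

def max_clique_size(edges, n):
    """Largest clique in an n-vertex graph by exhaustive search over
    vertex subsets, largest first (mirroring a pool over all 2^n subsets)."""
    adj = {frozenset(e) for e in edges}
    for r in range(n, 1, -1):
        for S in combinations(range(n), r):
            # S is a clique iff every pair of its vertices is an edge
            if all(frozenset(p) in adj for p in combinations(S, 2)):
                return r
    return 1 if n else 0   # a single vertex is a trivial clique

# Hypothetical 6-vertex graph: a 4-clique {0,1,2,3} plus a pendant path 3-4-5
E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
print(max_clique_size(E, 6))  # 4
```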
Maximal workload capacity on moving platforms
Heus, R.; Wertheim, A.H.
1996-01-01
Physical tasks on a moving platform required more energy than the same tasks on a non-moving platform. In this study the maximum aerobic performance (defined as VO2max) of people working on a moving floor was established and compared to the maximal aerobic performance on a non-moving floor. The main
Maximizing Resource Utilization in Video Streaming Systems
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…
Maximizing throughput in an automated test system
朱君
2007-01-01
Overview: This guide is a collection of whitepapers designed to help you develop test systems that lower your cost, increase your test throughput, and can scale with future requirements. This whitepaper provides strategies for maximizing system throughput. To download the complete developer's guide (120 pages), visit ni.com/automatedtest.
The gaugings of maximal D=6 supergravity
Bergshoeff, E.; Samtleben, H.; Sezgin, E.
2008-01-01
We construct the most general gaugings of the maximal D=6 supergravity. The theory is (2,2) supersymmetric, and possesses an on-shell SO(5,5) duality symmetry which plays a key role in determining its couplings. The field content includes 16 vector fields that carry a chiral spinor representat
WEIGHTED BOUNDEDNESS OF A ROUGH MAXIMAL OPERATOR
无
2000-01-01
In this note the authors give the weighted L^p-boundedness for a class of maximal singular integral operators with rough kernel. The result in this note is an improvement and extension of the result obtained by Chen and Lin in 1990.
Maximizing the Range of a Projectile.
Brown, Ronald A.
1992-01-01
Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
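For reference, the standard result that all four solution methods converge on (level ground, no air resistance) follows without calculus, since the range depends on the launch angle only through sin 2θ:

```latex
% Launch speed v, launch angle \theta, gravitational acceleration g:
R(\theta) = \frac{v^2 \sin 2\theta}{g}
% Since \sin 2\theta \le 1, with equality at 2\theta = 90^\circ:
\theta^{*} = 45^\circ, \qquad R_{\max} = \frac{v^2}{g}
```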
Ehrenfest's Lottery--Time and Entropy Maximization
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
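The urn dynamics are easy to simulate; a sketch of the classic two-urn Ehrenfest version of such a lottery (marble count, step count, and seed are arbitrary choices, not taken from the article):

```python
import random

def ehrenfest(n_marbles=100, steps=10_000, seed=1):
    """Two-urn Ehrenfest model: each step, a uniformly chosen marble
    jumps to the other urn. Returns the history of urn-A counts, which
    drifts toward the entropy-maximizing even split n/2."""
    rng = random.Random(seed)
    in_a = n_marbles            # start far from equilibrium: all marbles in urn A
    history = []
    for _ in range(steps):
        # the chosen marble sits in urn A with probability in_a / n_marbles
        if rng.random() < in_a / n_marbles:
            in_a -= 1
        else:
            in_a += 1
        history.append(in_a)
    return history

hist = ehrenfest()
print(hist[0], sum(hist[-1000:]) / 1000)  # drifts from 100 toward the 50/50 split
```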
Testing maximality in muon neutrino flavor mixing
Choubey, S; Choubey, Sandhya; Roy, Probir
2003-01-01
The small difference between the survival probabilities of muon neutrino and antineutrino beams, traveling through earth matter in a long baseline experiment such as MINOS, is shown to be an important measure of any possible deviation from maximality in the flavor mixing of those states.
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequen...
On the Hardy-Littlewood maximal theorem
Shinji Yamashita
1982-01-01
The Hardy-Littlewood maximal theorem is extended to functions of class PL in the sense of E. F. Beckenbach and T. Radó, with a more precise expression of the absolute constant in the inequality. As applications we deduce some results on hyperbolic Hardy classes in terms of the non-Euclidean hyperbolic distance in the unit disk.
Maximal Cartel Pricing and Leniency Programs
Houba, H.E.D.; Motchenkova, E.; Wen, Q.
2008-01-01
For a general class of oligopoly models with price competition, we analyze the impact of ex-ante leniency programs in antitrust regulation on the endogenous maximal-sustainable cartel price. This impact depends upon industry characteristics including its cartel culture. Our analysis disentangles the
How to Generate Good Profit Maximization Problems
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
Maximally entangled mixed states made easy
Aiello, A; Voigt, D; Woerdman, J P
2006-01-01
We show that, contrary to a recent claim [M. Ziman and V. Bužek, Phys. Rev. A 72, 052325 (2005)], it is possible to achieve maximally entangled mixed states of two qubits from the singlet state via the action of local nonunital quantum channels. Moreover, we present a simple, feasible linear-optical implementation of one such channel.
Maximizing scientific knowledge from randomized clinical trials
Gustafsson, Finn; Atar, Dan; Pitt, Bertram
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly...
Maximal Heat Generation in Nanoscale Systems
ZHOU Li-Ling; LI Shu-Shen; ZENG Zhao-Yang
2009-01-01
We investigate the heat generation in a nanoscale system coupled to normal leads and find that it is maximal when the average occupation of the electrons in the nanoscale system is 0.5, no matter what mechanism induces the heat generation.
Understanding violations of Gricean maxims in preschoolers and adults.
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
Understanding Violations of Gricean Maxims in Preschoolers and Adults
Mako Okanda
2015-07-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
谢胜利; 李名标
2011-01-01
This paper analyzes the current state of discrete mathematics teaching at independent colleges. Guided by the principle of teaching what is practical and sufficient, we made substantial additions, deletions, and modifications to the course content and improved the teaching methods accordingly. We propose emphasizing the applicability of the course, adding an experimental teaching component, and stressing self-directed learning. After one semester of teaching practice, good teaching results were achieved.
Weiss, I.
2007-01-01
The thesis introduces the new concept of dendroidal set. Dendroidal sets are a generalization of simplicial sets that are particularly suited to the study of operads in the context of homotopy theory. The relation between operads and dendroidal sets is established via the dendroidal nerve functor wh…
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2011-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.80 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
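The kind of power calculation described in this abstract can be sketched with a small Monte Carlo simulation. This is illustrative only: the noise level (`sigma`), the simple log-linear trend model, and the 2-standard-error detection rule below are hypothetical stand-ins, not the study's hierarchical variance estimates.

```python
import math
import random

def trend_power(n_years, annual_decline, sigma=0.5, n_sims=500, seed=1):
    """Estimate power to detect a log-linear decline in mean CPE.

    Each simulated survey yields one mean log-CPE per year with Gaussian
    noise (sd = sigma); the trend is "detected" when the least-squares
    slope differs from zero by more than 2 standard errors.
    """
    rng = random.Random(seed)
    slope_true = math.log(1.0 - annual_decline)  # per-year change in log CPE
    years = list(range(n_years))
    xbar = sum(years) / n_years
    sxx = sum((x - xbar) ** 2 for x in years)
    detected = 0
    for _ in range(n_sims):
        y = [slope_true * x + rng.gauss(0.0, sigma) for x in years]
        ybar = sum(y) / n_years
        slope = sum((x - xbar) * (yi - ybar) for x, yi in zip(years, y)) / sxx
        resid = [yi - ybar - slope * (x - xbar) for x, yi in zip(years, y)]
        se = math.sqrt(sum(r * r for r in resid) / (n_years - 2) / sxx)
        if se > 0 and abs(slope) / se > 2.0:
            detected += 1
    return detected / n_sims

# Power grows with survey duration and with the steepness of the decline,
# mirroring the qualitative pattern reported in the case study.
p_short = trend_power(n_years=5, annual_decline=0.05)
p_long = trend_power(n_years=20, annual_decline=0.05)
```

Under these toy settings, a 5% annual decline is rarely detected within 5 years but usually detected within 20, matching the abstract's qualitative message.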
Dopaminergic balance between reward maximization and policy complexity
Naama Parush
2011-05-01
Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (the actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement learning signal and as a pseudo-temperature signal controlling the general level of basal ganglia excitability and the motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamo-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, the probability of an action is determined by its estimated value and the pseudo-temperature, but in addition it also varies according to the frequency of previous choices of that action. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
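A temperature-controlled softmax with a choice-frequency term, as described in this abstract, can be sketched in a few lines. This is a toy illustration: the additive perseveration bonus and its weight are hypothetical, not the paper's exact formulation.

```python
import math

def softmax_policy(values, counts, temperature, persev_weight=0.2):
    """Action probabilities from estimated values plus an experience term.

    values      : estimated value of each action
    counts      : how often each action was chosen before (experience)
    temperature : pseudo-temperature (standing in for the
                  dopamine-controlled excitability signal)
    """
    total = sum(counts) or 1
    # Preference = estimated value + bonus for previously frequent choices.
    prefs = [v + persev_weight * c / total for v, c in zip(values, counts)]
    exps = [math.exp(p / temperature) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

# Low pseudo-temperature -> near-greedy; high -> near-uniform exploration.
greedy = softmax_policy([1.0, 0.5, 0.1], [0, 0, 0], temperature=0.05)
explore = softmax_policy([1.0, 0.5, 0.1], [0, 0, 0], temperature=10.0)
```

Raising `counts` for one action shifts probability toward it even at fixed values, which is the "experience-modulated" part of the policy.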
Mining Maximal Frequent Patterns in a Unidirectional FP-tree
SONG Jing-jing; LIU Rui-xin; WANG Yan; JIANG Bao-qing
2006-01-01
Because mining the complete set of frequent patterns from a dense database can be impractical, an interesting alternative has been proposed recently. Instead of mining the complete set of frequent patterns, the new model finds only the maximal frequent patterns, from which all frequent patterns can be generated. The FP-growth algorithm is one of the most efficient frequent-pattern mining methods published so far. However, because the FP-tree and conditional FP-trees must be two-way traversable, a great deal of memory is needed during mining. This paper proposes an efficient algorithm, Unid_FP-Max, for mining maximal frequent patterns based on a unidirectional FP-tree. Owing to the way the unidirectional FP-tree and conditional unidirectional FP-trees are generated, the algorithm reduces space consumption as far as possible. With two techniques, single-path pruning and header-table pruning, which eliminate many of the conditional unidirectional FP-trees generated recursively during mining, Unid_FP-Max further lowers the cost in both time and space.
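The key property this abstract relies on, that the maximal frequent patterns compactly represent all frequent patterns, can be illustrated without any FP-tree machinery. The following is a naive brute-force sketch for tiny inputs, not the Unid_FP-Max algorithm itself.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Naive enumeration of all frequent itemsets (exponential; demo only)."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            support = sum(1 for t in transactions if set(cand) <= t)
            if support >= min_support:
                frequent.append(frozenset(cand))
    return frequent

def maximal_only(frequent):
    """Keep only itemsets that have no frequent proper superset."""
    return [s for s in frequent if not any(s < t for t in frequent)]

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
freq = frequent_itemsets(txns, min_support=3)
maximal = maximal_only(freq)
# Every frequent itemset is a subset of some maximal frequent itemset,
# so the (smaller) maximal set generates the whole frequent collection.
```

Here six itemsets are frequent but only three are maximal, and each frequent itemset is contained in one of them.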
Optimal deployment of resources for maximizing impact in spreading processes.
Lokhov, Andrey Y; Saad, David
2017-09-26
The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of "influential spreaders" for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings, the problem is often characterized by heterogeneous interactions and requires interventions in a dynamic fashion over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. We show how these problems can be addressed as particular instances of a universal analytical framework based on a scalable dynamic message-passing approach and demonstrate the efficacy of the method on a variety of real-world examples.
Optimal Deployment of Resources for Maximizing Impact in Spreading Processes
Lokhov, Andrey Y
2016-01-01
The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of "influential spreaders" for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings the problem is often characterized by heterogeneous interactions and requires interventions over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. We show how these problems can be addressed as particular instances of a universal analytical framework based on a scalable dynamic message-passing approach and demonstrate the efficacy of the method on...
Rousanoglou, Elissavet N; Oskouei, Ali E; Herzog, Walter
2007-01-01
Mechanical properties of skeletal muscles are often studied for controlled, electrically induced, maximal, or supra-maximal contractions. However, many mechanical properties, such as the force-length relationship and force enhancement following active muscle stretching, are quite different for maximal and sub-maximal, or electrically induced and voluntary contractions. Force depression, the loss of force observed following active muscle shortening, has been observed and is well documented for electrically induced and maximal voluntary contractions. Since sub-maximal voluntary contractions are arguably the most important for everyday movement analysis and for biomechanical models of skeletal muscle function, it is important to study force depression properties under these conditions. Therefore, the purpose of this study was to examine force depression following sub-maximal, voluntary contractions. Sets of isometric reference and isometric-shortening-isometric test contractions at 30% of maximal voluntary effort were performed with the adductor pollicis muscle. All reference and test contractions were executed by controlling force or activation using a feedback system. Test contractions included adductor pollicis shortening over 10 degrees, 20 degrees, and 30 degrees of thumb adduction. Force depression was assessed by comparing the steady-state isometric forces (activation control) or average electromyograms (EMGs) (force control) following active muscle shortening with those obtained in the corresponding isometric reference contractions. Force was decreased by 20% and average EMG was increased by 18% in the shortening test contractions compared to the isometric reference contractions. Furthermore, force depression was increased with increasing shortening amplitudes, and the relative magnitudes of force depression were similar to those found in electrically stimulated and maximal contractions. We conclude from these results that force depression occurs in sub-maximal
U. Platt
2011-12-01
We present aerosol and trace gas profiles derived from MAX-DOAS observations. Our inversion scheme is based on simple profile parameterisations used as input for an atmospheric radiative transfer model (forward model). From a least squares fit of the forward model to the MAX-DOAS measurements, two profile parameters are retrieved, including integrated quantities (aerosol optical depth or trace gas vertical column density) and parameters describing the height and shape of the respective profiles. From these results, the aerosol extinction and trace gas mixing ratios can also be calculated. We apply the profile inversion to MAX-DOAS observations during a measurement campaign in Milano, Italy, in September 2003, which allowed simultaneous observations from three telescopes (directed to north, west, and south). Profile inversions for aerosols and trace gases were possible on 23 days. Especially in the middle of the campaign (17–20 September 2003), enhanced values of aerosol optical depth and of NO2 and HCHO mixing ratios were found. The retrieved layer heights were typically similar for HCHO and aerosols. For NO2, lower layer heights were found, which increased during the day. The MAX-DOAS inversion results are compared to independent measurements: (1) aerosol optical depth measured at an AERONET station at Ispra; (2) near-surface NO2 and HCHO (formaldehyde) mixing ratios measured by long-path DOAS and Hantzsch instruments at Bresso; (3) vertical profiles of HCHO and aerosols measured by an ultralight aircraft. Depending on the viewing direction, the aerosol optical depths from MAX-DOAS are either smaller or larger than those from AERONET observations. Similar comparison results are found for the MAX-DOAS NO2 mixing ratios versus long-path DOAS measurements. In contrast, the MAX-DOAS HCHO mixing ratios are generally higher than those from the long-path DOAS or Hantzsch instruments. The comparison of the HCHO and aerosol profiles from the aircraft showed reasonable…
X. Li
2011-06-01
We present aerosol and trace gas profiles derived from MAX-DOAS observations. Our inversion scheme is based on simple profile parameterisations used as input for an atmospheric radiative transfer model (forward model). From a least squares fit of the forward model to the MAX-DOAS measurements, two profile parameters are retrieved, including integrated quantities (aerosol optical depth or trace gas vertical column density) and parameters describing the height and shape of the respective profiles. From these results, the aerosol extinction and trace gas mixing ratios can also be calculated. We apply the profile inversion to MAX-DOAS observations during a measurement campaign in Milano, Italy, in September 2003, which allowed simultaneous observations from three telescopes (directed to north, west, and south). Profile inversions for aerosols and trace gases were possible on 23 days. Especially in the middle of the campaign (17–20 September 2003), enhanced values of aerosol optical depth and of NO2 and HCHO mixing ratios were found. The retrieved layer heights were typically similar for HCHO and aerosols. For NO2, lower layer heights were found, which increased during the day. The MAX-DOAS inversion results are compared to independent measurements: (1) aerosol optical depth measured at an AERONET station at Ispra; (2) near-surface NO2 and HCHO (formaldehyde) mixing ratios measured by long-path DOAS and Hantzsch instruments at Bresso; (3) vertical profiles of HCHO and aerosols measured by an ultralight aircraft. Depending on the viewing direction, the aerosol optical depths from MAX-DOAS are either smaller or larger than those from AERONET observations. Similar comparison results are found for the MAX-DOAS NO2 mixing ratios versus long-path DOAS measurements. In contrast, the MAX-DOAS HCHO mixing ratios are generally higher than those from the long-path DOAS or Hantzsch instruments. The comparison of the HCHO and aerosol profiles from the aircraft showed reasonable…
A New Algorithm to Optimize Maximal Information Coefficient.
Yuan Chen
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thus removes the maximal-grid-size restriction of the original ApproxMaxMI algorithm. Computational experiments show that the ChiMIC algorithm maintains the same MIC values for noiseless functional relationships, but gives much smaller MIC values for independent variables. For noisy functional relationships, the ChiMIC algorithm reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships in a better way, and the statistical powers of MIC calculated by ChiMIC are higher than those calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much lower than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC.
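The grid-based idea underlying MIC can be shown in a few lines: bin both variables on a grid and compute the normalized mutual information of the resulting contingency table. This is only the building block that MIC optimizes over many grids; it is not the ChiMIC or ApproxMaxMI algorithm, and the fixed 4x4 equal-width grid is an arbitrary choice for illustration.

```python
import math

def grid_mi(xs, ys, bins=4):
    """Mutual information, normalized by log(bins), on an equal-width grid."""
    def bin_index(v, lo, hi):
        if hi == lo:
            return 0
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))
    bx = [bin_index(x, min(xs), max(xs)) for x in xs]
    by = [bin_index(y, min(ys), max(ys)) for y in ys]
    n = len(xs)
    joint, px, py = {}, {}, {}
    for i, j in zip(bx, by):
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij * n * n / (px[i] * py[j]))
    return mi / math.log(bins)

xs = [i / 99 for i in range(100)]
functional = grid_mi(xs, [x * x for x in xs])                 # y = x^2
independent = grid_mi(xs, [(i * 37 % 100) / 99 for i in range(100)])
# The deterministic relationship scores high; the scrambled pairing
# scores near zero, which is the contrast MIC is built to detect.
```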
Systemic consultation and goal setting
Carr, Alan
1993-01-01
Over two decades of empirical research conducted within a positivist framework has shown that goal setting is a particularly useful method for influencing task performance in occupational and industrial contexts. The conditions under which goal setting is maximally effective are now clearly established. These include situations where there is a high level of acceptance and commitment, where goals are specific and challenging, where the task is relatively simple rather than ...
Maximal sfermion flavour violation in super-GUTs
Ellis, John [King's College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom)]; Olive, Keith A. [CERN, Theoretical Physics Department, Geneva (Switzerland); University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)]; Velasco-Sevilla, L. [University of Bergen, Department of Physics and Technology, PO Box 7803, Bergen (Norway)]
2016-10-15
We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses m_0 specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses m_{1/2}, as is expected in no-scale models, the dominant effects of renormalisation between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to m_{1/2} and generation-independent. In this case, the input scalar masses m_0 may violate flavour maximally, a scenario we call MaxSFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.
Maximal sfermion flavour violation in super-GUTs
Ellis, John; Olive, Keith A.; Velasco-Sevilla, Liliana
2016-01-01
We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses $m_0$ specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses $m_{1/2}$, as is expected in no-scale models, the dominant effects of renormalization between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to $m_{1/2}$ and generation-independent. In this case, the input scalar masses $m_0$ may violate flavour maximally, a scenario we call MaxFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.
Outage Constrained Secrecy Rate Maximization Using Cooperative Jamming
Luo, Shuangyu; Petropulu, Athina
2012-01-01
We consider a Gaussian MISO wiretap channel, where a multi-antenna source communicates with a single-antenna destination in the presence of a single-antenna eavesdropper. The communication is assisted by multi-antenna helpers that act as jammers to the eavesdropper. Each helper independently transmits noise that lies in the null space of its channel to the destination, thus creating no interference at the destination. Under the assumption that there is eavesdropper channel uncertainty, we derive the optimal covariance matrix for the source signal so that the secrecy rate is maximized subject to outage probability and power constraints. Assuming that the eavesdropper channels follow a zero-mean Gaussian model with known covariances, we derive the outage probability in closed form. Simulation results in support of the analysis are provided.
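The null-space jamming idea is easy to see in a toy real-valued example with a two-antenna helper: choose the jamming vector orthogonal to the helper-to-destination channel, so the destination receives no interference while the eavesdropper does. This is a geometric sketch only, with made-up channel values; the actual scheme uses complex channels and optimized covariance matrices.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def null_space_jammer(h_dest):
    """For a 2-antenna helper with real channel h_dest = (h0, h1) to the
    destination, return a unit jamming vector w with <w, h_dest> = 0."""
    h0, h1 = h_dest
    norm = (h0 * h0 + h1 * h1) ** 0.5
    return (-h1 / norm, h0 / norm)

h_dest = (3.0, 4.0)    # helper -> destination channel (hypothetical values)
h_eve = (1.0, -2.0)    # helper -> eavesdropper channel (hypothetical values)
w = null_space_jammer(h_dest)

interference_at_dest = dot(w, h_dest)   # exactly zero by construction
noise_at_eve = dot(w, h_eve)            # generically nonzero: eve is jammed
```

With more helper antennas the null space has higher dimension, leaving freedom to shape the jamming toward the eavesdropper.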
Measurable Maximal Energy and Minimal Time Interval
Dahab, Eiman Abou El
2014-01-01
The possibility of finding the measurable maximal energy and the minimal time interval is discussed in different quantum aspects. It is found that the linear generalized uncertainty principle (GUP) approach gives a non-physical result. Based on the large-scale Schwarzschild solution, the quadratic GUP approach is utilized. The calculations are performed at the shortest distance, at which general relativity is assumed to be a good approximation for quantum gravity, and at larger distances as well. It is found that both the maximal energy and the minimal time are of the order of the Planck time. Accordingly, the uncertainties in both quantities are bounded. Some physical insights are addressed. Also, the implications for the physics of the early Universe and for quantized mass are outlined. The results are related to the existence of a finite cosmological constant and a minimum mass (mass quanta).
Maximal temperature in a simple thermodynamical system
Dai, De-Chang
2016-01-01
Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense to talk about temperatures higher than the Planck temperature in the absence of a full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity, before we reach the realm of quantum gravity. Namely, if two particles with a given center-of-mass energy come within a distance shorter than the Schwarzschild diameter of each other, according to classical gravity they will form a black hole. One can then calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
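The scale involved can be checked directly from the fundamental constants (standard CODATA values). The factor of about three below the Planck temperature is the abstract's quoted result, used here only as an order-of-magnitude illustration, not a derivation.

```python
import math

# CODATA constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K

# Planck temperature: T_P = sqrt(hbar * c^5 / (G * k_B^2)) ~ 1.42e32 K
T_planck = math.sqrt(hbar * c**5 / (G * k_B**2))

# The claimed black-hole-dominated critical temperature is roughly a
# third of the Planck temperature (order of magnitude only).
T_max_estimate = T_planck / 3
```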
Predicting Contextual Sequences via Submodular Function Maximization
Dey, Debadeepta; Hebert, Martial; Bagnell, J Andrew
2012-01-01
Sequence optimization, where the items in a list are ordered to maximize some reward, has many applications such as web advertisement placement, search, and control libraries in robotics. Previous work in sequence optimization produces a static ordering that does not take any features of the item or the context of the problem into account. In this work, we propose a general approach to ordering the items within the sequence based on the context (e.g., perceptual information, environment description, and goals). We take a simple, efficient, reduction-based approach where the choice and order of the items is established by repeatedly learning simple classifiers or regressors for each "slot" in the sequence. Our approach leverages recent work on submodular function maximization to provide a formal regret reduction from submodular sequence optimization to simple cost-sensitive prediction. We apply our contextual sequence prediction algorithm to optimize control libraries and demonstrate results on two robotics problems: ...
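The submodular machinery this abstract builds on can be illustrated with the classic greedy algorithm on a coverage objective, which carries the standard (1 - 1/e) approximation guarantee for monotone submodular rewards. This is the generic textbook greedy with made-up cover sets, not the paper's contextual algorithm.

```python
def greedy_sequence(items, cover_sets, k):
    """Pick k items greedily to maximize coverage (a submodular reward).

    cover_sets[item] is the set of elements the item covers; each slot
    is filled with the item of largest marginal gain over what is
    already covered.
    """
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, -1
        for item in items:
            if item in chosen:
                continue
            gain = len(cover_sets[item] - covered)
            if gain > best_gain:
                best, best_gain = item, gain
        chosen.append(best)
        covered |= cover_sets[best]
    return chosen, covered

# Hypothetical items and the elements each one covers.
cover = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6, 7},
    "d": {1, 7},
}
chosen, covered = greedy_sequence(list(cover), cover, k=2)
```

Greedy first takes "c" (4 new elements), then "a" (3 new elements), covering everything; a contextual version would instead learn a predictor for each slot.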
Nonlinear trading models through Sharpe Ratio maximization.
Choey, M; Weigend, A S
1997-08-01
While many trading strategies are based on price prediction, traders in financial markets are typically interested in optimizing risk-adjusted performance such as the Sharpe Ratio, rather than the price predictions themselves. This paper introduces an approach which generates a nonlinear strategy that explicitly maximizes the Sharpe Ratio. It is expressed as a neural network model whose output is the position size between a risky and a risk-free asset. The iterative parameter update rules are derived and compared to alternative approaches. The resulting trading strategy is evaluated and analyzed on both computer-generated data and real world data (DAX, the daily German equity index). Trading based on Sharpe Ratio maximization compares favorably to both profit optimization and probability matching (through cross-entropy optimization). The results show that the goal of optimizing out-of-sample risk-adjusted profit can indeed be achieved with this nonlinear approach.
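The objective being maximized is easy to state: for a series of period returns, the ex-post Sharpe Ratio is the mean excess return divided by the standard deviation of returns. The sketch below just evaluates that metric on two made-up return series; it is not the paper's neural trading model.

```python
import math

def sharpe_ratio(returns, risk_free=0.0):
    """Ex-post (non-annualized) Sharpe Ratio of a sequence of returns."""
    n = len(returns)
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / n
    var = sum((e - mean) ** 2 for e in excess) / (n - 1)  # sample variance
    return mean / math.sqrt(var)

# Two hypothetical strategies with similar mean return (~1% per period):
steady = sharpe_ratio([0.01, 0.012, 0.009, 0.011, 0.010])
volatile = sharpe_ratio([0.05, -0.04, 0.06, -0.03, 0.01])
# The steadier strategy earns the higher Sharpe Ratio, which is why
# maximizing it differs from maximizing raw profit or prediction accuracy.
```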
Maximally Symmetric Spacetimes emerging from thermodynamic fluctuations
Bravetti, A; Quevedo, H
2015-01-01
In this work we prove that the maximally symmetric vacuum solutions of General Relativity emerge from the geometric structure of statistical mechanics and thermodynamic fluctuation theory. To present our argument, we begin by showing that the pseudo-Riemannian structure of the Thermodynamic Phase Space is a solution to the vacuum Einstein-Gauss-Bonnet theory of gravity with a cosmological constant. Then, we use the geometry of equilibrium thermodynamics to demonstrate that the maximally symmetric vacuum solutions of Einstein's field equations (Minkowski, de Sitter, and anti-de Sitter spacetimes) correspond to thermodynamic fluctuations. Moreover, we argue that these might be the only possible solutions that can be derived in this manner. Thus, the results presented here are the first concrete examples of spacetimes effectively emerging from the thermodynamic limit over an unspecified microscopic theory without any further assumptions.
Consistent 4-form fluxes for maximal supergravity
Godazgar, Hadi; Krueger, Olaf; Nicolai, Hermann
2015-01-01
We derive new ansaetze for the 4-form field strength of D=11 supergravity corresponding to uplifts of four-dimensional maximal gauged supergravity. In particular, the ansaetze directly yield the components of the 4-form field strength in terms of the scalars and vectors of the four-dimensional maximal gauged supergravity---in this way they provide an explicit uplift of all four-dimensional consistent truncations of D=11 supergravity. The new ansaetze provide a substantially simpler method for uplifting d=4 flows compared to the previously available method using the 3-form and 6-form potential ansaetze. The ansatz for the Freund-Rubin term allows us to conjecture a `master formula' for the latter in terms of the scalar potential of d=4 gauged supergravity and its first derivative. We also resolve a long-standing puzzle concerning the antisymmetry of the flux obtained from uplift ansaetze.
Utility maximization in incomplete markets with default
Lim, Thomas
2008-01-01
We address the maximization problem of expected utility from terminal wealth. The special feature of this paper is that we consider a financial market where the price process of risky assets can have a default time. Using dynamic programming, we characterize the value function with a backward stochastic differential equation and derive the optimal portfolio policies. We separately treat the cases of exponential, power, and logarithmic utility.
Operational Modal Analysis using Expectation Maximization Algorithm
Cara Cañas, Francisco Javier; Carpio Huertas, Jaime; Juan Ruiz, Jesús; Alarcón Álvarez, Enrique
2011-01-01
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated applying the proposed identification method...
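Although this paper applies Expectation Maximization to a stochastic state-space model for structural identification, the EM iteration itself can be shown on the simplest possible case: a two-component one-dimensional Gaussian mixture with equal weights. This is a generic EM sketch on made-up data, not the modal-identification algorithm.

```python
import math

def em_gmm_1d(data, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture with equal weights."""
    mu = [min(data), max(data)]          # crude initialization
    sigma = [1.0, 1.0]
    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each point.
        resp = []
        for x in data:
            p = [math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2)) / sigma[k]
                 for k in range(2)]
            resp.append(p[0] / (p[0] + p[1]))
        # M-step: re-estimate means and standard deviations.
        for k, r in ((0, resp), (1, [1 - g for g in resp])):
            w = sum(r)
            mu[k] = sum(g * x for g, x in zip(r, data)) / w
            var = sum(g * (x - mu[k]) ** 2 for g, x in zip(r, data)) / w
            sigma[k] = max(math.sqrt(var), 1e-3)  # floor avoids collapse
    return mu, sigma

# Two well-separated clusters around 0 and 10 (synthetic data).
data = [-0.5, 0.0, 0.3, 0.1, -0.2, 9.8, 10.1, 10.0, 9.9, 10.3]
mu, sigma = em_gmm_1d(data)
```

The same E-step/M-step alternation, with a Kalman smoother as the E-step, is what turns EM into a system-identification tool in the paper's setting.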
Revenue Maximizing Head Starts in Contests
Franke, Jörg; Leininger, Wolfgang; Wasser, Cédric
2014-01-01
We characterize revenue maximizing head starts for all-pay auctions and lottery contests with many heterogeneous players. We show that under optimal head starts all-pay auctions revenue-dominate lottery contests for any degree of heterogeneity among players. Moreover, all-pay auctions with optimal head starts induce higher revenue than any multiplicatively biased all-pay auction or lottery contest. While head starts are more effective than multiplicative biases in all-pay auctions, they are l...
Maximal supersymmetry and B-mode targets
Kallosh, Renata; Linde, Andrei; Wrase, Timm; Yamada, Yusuke
2017-04-01
Extending the work of Ferrara and one of the authors [1], we present dynamical cosmological models of α-attractors with plateau potentials for 3α = 1, 2, 3, 4, 5, 6, 7. These models are motivated by geometric properties of maximally supersymmetric theories: M-theory, superstring theory, and maximal N = 8 supergravity. After a consistent truncation of maximal to minimal supersymmetry in a seven-disk geometry, we perform a two-step procedure: 1) we introduce a superpotential, which stabilizes the moduli of the seven-disk geometry in a supersymmetric minimum; 2) we add a cosmological sector with a nilpotent stabilizer, which breaks supersymmetry spontaneously and leads to a desirable class of cosmological attractor models. These models, with n_s consistent with observational data and with tensor-to-scalar ratio r ≈ 10^{-2}–10^{-3}, provide natural targets for future B-mode searches. We relate the issue of stability of inflationary trajectories in these models to tessellations of a hyperbolic geometry.
Maximal respiratory pressures among adolescent swimmers.
Rocha Crispino Santos, M A; Pinto, M L; Couto Sant'Anna, C; Bernhoeft, M
2011-01-01
Maximal inspiratory pressures (MIP) and maximal expiratory pressures (MEP) are useful indices of respiratory muscle strength in athletes. The aims of this study were: to describe the strength of the respiratory muscles of an Olympic junior swim team, at baseline and after a standard physical training session; and to determine whether there is a differential inspiratory and expiratory pressure response to the physical training. A cross-sectional study evaluated 28 international-level swimmers with ages ranging from 15 to 17 years, 19 (61%) being males. At baseline, MIP was found to be lower in females (P = .001). The mean values reached by males and females were: MIP (cmH2O) = M: 100.4 (± 26.5)/F: 67.8 (± 23.2); MEP (cmH2O) = M: 87.4 (± 20.7)/F: 73.9 (± 17.3). After the physical training they reached: MIP (cmH2O) = M: 95.3 (± 30.3)/F: 71.8 (± 35.6); MEP (cmH2O) = M: 82.8 (± 26.2)/F: 70.4 (± 8.3). No differential pressure responses were observed in either males or females. These results suggest that swimmers can sustain the magnitude of the initial maximal pressures. Other studies should be developed to clarify whether MIP and MEP could be used as a marker of an athlete's performance.
General conditions for maximal violation of non-contextuality in discrete and continuous variables
Laversanne-Finot, A.; Ketterer, A.; Barros, M. R.; Walborn, S. P.; Coudreau, T.; Keller, A.; Milman, P.
2017-04-01
The contextuality of quantum mechanics can be shown by the violation of inequalities based on measurements of well chosen observables. An important property of such observables is that their expectation value can be expressed in terms of probabilities for obtaining two exclusive outcomes. Examples of such inequalities have been constructed using either observables with a dichotomic spectrum or using periodic functions obtained from displacement operators in phase space. Here we identify the general conditions on the spectral decomposition of observables demonstrating state independent contextuality of quantum mechanics. Our results not only unify existing strategies for maximal violation of state independent non-contextuality inequalities but also lead to new scenarios enabling such violations. Among the consequences of our results is the impossibility of having a state independent maximal violation of non-contextuality in the Peres–Mermin scenario with discrete observables of odd dimensions.
Cardiorespiratory Coordination in Repeated Maximal Exercise
Sergi Garcia-Retortillo
2017-06-01
Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC
Cardiorespiratory Coordination in Repeated Maximal Exercise.
Garcia-Retortillo, Sergi; Javierre, Casimiro; Hristovski, Robert; Ventura, Josep L; Balagué, Natàlia
2017-01-01
Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC evaluation in
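The study's coordination measure rests on two standard computations: an eigen-decomposition of the covariance matrix of the standardized physiological series, and the Shannon entropy of the normalized eigenvalues (variance spread over more PCs means higher entropy, i.e. weaker coordination). A minimal two-variable sketch; the heart-rate and ventilation series are hypothetical, and the study used six variables:

```python
import math

def eigvals_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return (tr + disc) / 2.0, (tr - disc) / 2.0

def pc_entropy(x, y):
    """Shannon entropy (bits) of the normalized covariance eigenvalues of
    two standardized series: ~0 when one PC carries all variance (strong
    coordination), approaching 1 when variance is split over both PCs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / n)
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / n)
    xs = [(v - mx) / sx for v in x]
    ys = [(v - my) / sy for v in y]
    cov = sum(a * b for a, b in zip(xs, ys)) / n
    cov = max(-1.0, min(1.0, cov))  # guard against float rounding
    l1, l2 = eigvals_2x2(1.0, cov, cov, 1.0)
    total = l1 + l2
    ps = [l / total for l in (l1, l2) if l > 1e-12]
    return sum(-p * math.log(p, 2) for p in ps)

# Perfectly coordinated signals: one PC carries all variance, entropy ~ 0.
hr = [60, 70, 80, 90, 100]
ve = [20, 30, 40, 50, 60]  # perfectly correlated with hr
print(round(pc_entropy(hr, ve), 3))  # -> 0.0
```

With weakly related series (e.g. a trend against an alternating signal) the entropy rises toward 1 bit, mirroring the "reduction of CRC" reading of higher entropy in Test 2.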
Generation of Maximally Generalized Rules
徐如燕; 鲁汉榕; 郭齐胜
2001-01-01
In this paper, the generation of maximally generalized rules in the course of classification knowledge discovery based on rough set theory is discussed. First, an algorithm is introduced. Second, we propose the information-based J-measure as another measure of attribute significance. This measure is used for heuristically selecting the conditions to be removed in the process of extracting a set of maximally generalized rules. Finally, we present an example to illustrate the process of the algorithm.
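The J-measure used for attribute significance is, in its standard Smyth-Goodman form, the rule probability times the Kullback-Leibler divergence between the posterior and prior class distributions; the exact variant used in the paper may differ. A sketch:

```python
import math

def j_measure(p_y, p_x, p_x_given_y):
    """Information-based J-measure of a rule 'if y then x': the
    probability of the condition y, times the K-L divergence between
    the class distribution given y and the prior class distribution."""
    def term(a, b):
        return a * math.log(a / b, 2) if a > 0 else 0.0
    return p_y * (term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x))

# A condition that perfectly predicts the class is most informative.
print(round(j_measure(0.5, 0.5, 1.0), 3))  # -> 0.5
# A condition that leaves the class distribution unchanged carries none.
print(round(j_measure(0.5, 0.5, 0.5), 3))  # -> 0.0
```

Dropping a rule condition and recomputing the J-measure gives exactly the kind of heuristic signal the abstract describes for deciding which conditions to remove.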
An information-theoretic analysis of return maximization in reinforcement learning.
Iwata, Kazunori
2011-12-01
We present a general analysis of return maximization in reinforcement learning. This analysis does not require assumptions of Markovianity, stationarity, and ergodicity for the stochastic sequential decision processes of reinforcement learning. Instead, our analysis assumes the asymptotic equipartition property fundamental to information theory, providing a substantially different view from that in the literature. As our main results, we show that return maximization is achieved by the overlap of typical and best sequence sets, and we present a class of stochastic sequential decision processes with the necessary condition for return maximization. We also describe several examples of best sequences in terms of return maximization in the class of stochastic sequential decision processes, which satisfy the necessary condition.
Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo
2017-09-27
We aimed to clarify the mechanical determinants of sprinting performance during acceleration and maximal speed phases of a single sprint, using ground reaction forces (GRFs). While 18 male athletes performed a 60-m sprint, GRF was measured at every step over a 50-m distance from the start. Variables during the entire acceleration phase were approximated with a fourth-order polynomial. Subsequently, accelerations at 55%, 65%, 75%, 85%, and 95% of maximal speed, and running speed during the maximal speed phase were determined as sprinting performance variables. Ground reaction impulses and mean GRFs during the acceleration and maximal speed phases were selected as independent variables. Stepwise multiple regression analysis selected propulsive and braking impulses as contributors to acceleration at 55%-95% (β > 0.724) and 75%-95% (β > 0.176), respectively, of maximal speed. Moreover, mean vertical force was a contributor to maximal running speed (β = 0.481). The current results demonstrate that exerting a large propulsive force during the entire acceleration phase, suppressing braking force when approaching maximal speed, and producing a large vertical force during the maximal speed phase are essential for achieving greater acceleration and maintaining higher maximal speed, respectively.
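The fourth-order polynomial approximation of the acceleration-phase variables can be reproduced with ordinary least squares. A self-contained sketch using the normal equations and Gaussian elimination; the data here are a synthetic quartic, not the study's GRF measurements:

```python
def polyfit4(ts, ys):
    """Least-squares fit of y = c0 + c1*t + ... + c4*t^4 by solving the
    5x5 normal equations with Gaussian elimination (partial pivoting)."""
    deg, n = 4, 5
    # Normal equations: A[i][j] = sum t^(i+j), b[i] = sum y * t^i
    A = [[sum(t ** (i + j) for t in ts) for j in range(n)] for i in range(n)]
    b = [sum(y * t ** i for t, y in zip(ts, ys)) for i in range(n)]
    for col in range(n):  # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        s = sum(A[r][c] * coef[c] for c in range(r + 1, n))
        coef[r] = (b[r] - s) / A[r][r]
    return coef

def poly_eval(coef, t):
    return sum(c * t ** i for i, c in enumerate(coef))

# Sanity check: the fit recovers a known quartic from exact samples.
true_coef = [1.0, -2.0, 0.5, 0.0, 0.25]
ts = [i * 0.5 for i in range(12)]
ys = [poly_eval(true_coef, t) for t in ts]
fit = polyfit4(ts, ys)
print([round(c, 6) for c in fit])
```

Derived quantities such as acceleration at a given percentage of maximal speed then follow by evaluating the fitted polynomial's derivative at the corresponding time.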
Explicit Analysis of Creating Maximally Entangled State in the Mott Insulator State
LI Min-Si; TIAN Li-Jun; ZHANG Hong-Biao
2004-01-01
We clarify the essence of the method proposed by You (Phys. Rev. Lett. 90 (2004) 030402) to create the maximally entangled atomic N-GHZ state in the Mott insulator state. Based on time-independent perturbation theory, we find that the validity of the method can be summarized as follows: the Hamiltonian governing the evolution is approximately equivalent to the type aJ_x^2 + bJ_x, which is the well-known form used to create the maximally entangled state.
Parton distributions based on a maximally consistent dataset
Rojo, Juan
2014-01-01
The choice of data that enters a global QCD analysis can have a substantial impact on the resulting parton distributions and their predictions for collider observables. One of the main reasons for this has to do with the possible presence of inconsistencies, either internal within an experiment or external between different experiments. In order to assess the robustness of the global fit, different definitions of a conservative PDF set, that is, a PDF set based on a maximally consistent dataset, have been introduced. However, these approaches are typically affected by theory biases in the selection of the dataset. In this contribution, after a brief overview of recent NNPDF developments, we propose a new, fully objective, definition of a conservative PDF set, based on the Bayesian reweighting approach. Using the new NNPDF3.0 framework, we produce various conservative sets, which turn out to be mutually in agreement within the respective PDF uncertainties, as well as with the global fit. We explore some of the...
Cole, James R; Dodge, William W; Findley, John S; Young, Stephen K; Horn, Bruce D; Kalkwarf, Kenneth L; Martin, Max M; Winder, Ronald L
2015-05-01
This Point/Counterpoint article discusses the transformation of dental practice from the traditional solo/small-group (partnership) model of the 1900s to large Dental Support Organizations (DSO) that support affiliated dental practices by providing nonclinical functions such as, but not limited to, accounting, human resources, marketing, and legal and practice management. Many feel that DSO-managed group practices (DMGPs) with employed providers will become the setting in which the majority of oral health care will be delivered in the future. Viewpoint 1 asserts that the traditional dental practice patterns of the past are shifting as many younger dentists gravitate toward employed positions in large group practices or the public sector. Although educational debt is relevant in predicting graduates' practice choices, other variables such as gender, race, and work-life balance play critical roles as well. Societal characteristics demonstrated by aging Gen Xers and those in the Millennial generation blend seamlessly with the opportunities DMGPs offer their employees. Viewpoint 2 contends the traditional model of dental care delivery-allowing entrepreneurial practitioners to make decisions in an autonomous setting-is changing but not to the degree nor as rapidly as Viewpoint 1 professes. Millennials entering the dental profession, with characteristics universally attributed to their generation, see value in the independence and flexibility that a traditional practice allows. Although DMGPs provide dentists one option for practice, several alternative delivery models offer current dentists and future dental school graduates many of the advantages of DMGPs while allowing them to maintain the independence and freedom a traditional practice provides.
Postactivation Potentiation Biases Maximal Isometric Strength Assessment
Leonardo Coelho Rabello Lima
2014-01-01
Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine whether PAP would influence isometric strength assessment. Healthy male volunteers (n = 23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during tPTI (RMS), and rate of torque development (RTD) in different intervals were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s−1 versus 727 ± 158 N·m·s−1), and RMS (59.1 ± 12.2% RMSMAX versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.
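The strength variables involved are simple functions of the sampled torque trace. A sketch with illustrative definitions (the single 100 ms RTD window and the synthetic trace are assumptions; the study computed RTD over several intervals):

```python
def mvc_metrics(torque, fs):
    """Peak torque (N*m), time-to-peak (ms), and rate of torque
    development over the first 100 ms, from a torque trace sampled
    at fs Hz. Illustrative definitions; labs differ on RTD windows."""
    ipt = max(torque)
    tpti_ms = torque.index(ipt) / fs * 1000.0
    w = int(0.1 * fs)                     # 100 ms window
    rtd = (torque[w] - torque[0]) / 0.1   # N*m / s
    return ipt, tpti_ms, rtd

# Hypothetical 1 kHz trace: linear rise to 250 N*m over the first 2 s.
fs = 1000
trace = [250.0 * min(i / 2000.0, 1.0) for i in range(2500)]
ipt, tpti, rtd = mvc_metrics(trace, fs)
print(ipt, tpti, rtd)  # peak 250 N*m, reached at 2000 ms, early RTD 125 N*m/s
```

Comparing these metrics between a first and second MVC is exactly the comparison in which the PAP effect described above would surface.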
Maximizing versus satisficing: happiness is a matter of choice.
Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R
2002-11-01
Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching
2015-01-01
We conjecture that the balanced complete bipartite graph K_{⌊n/2⌋,⌈n/2⌉} contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds on the numbers of cycles in graphs depending on numbers of vertices and edges, girth, and homomorphisms to small fixed graphs; and use the bounds to show that among regular graphs, the conjecture holds. We also consider graphs that are close to being regular, with the minimum and maximum degrees differing...
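For small n the conjecture can be spot-checked by brute force: enumerate vertex subsets and count the Hamiltonian cycles on each subset. A sketch comparing K_{3,3} with another triangle-free graph on 6 vertices (a check of two specific graphs only, not of the conjecture):

```python
from itertools import combinations, permutations

def count_cycles(n, edges):
    """Count all simple cycles of an undirected graph by enumerating
    vertex subsets and counting Hamiltonian cycles of each subset.
    Fixing the smallest vertex first leaves 2 sequences per cycle
    (the two traversal directions), hence the final halving."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    total = 0
    for k in range(3, n + 1):
        for sub in combinations(range(n), k):
            first, rest = sub[0], sub[1:]
            for perm in permutations(rest):
                cyc = (first,) + perm
                if all(adj[cyc[i]][cyc[(i + 1) % k]] for i in range(k)):
                    total += 1
    return total // 2

# n = 6, triangle-free: complete bipartite K_{3,3} vs the 6-cycle C6.
k33 = [(a, b) for a in range(3) for b in range(3, 6)]
c6 = [(i, (i + 1) % 6) for i in range(6)]
print(count_cycles(6, k33), count_cycles(6, c6))  # -> 15 1
```

K_{3,3} has nine 4-cycles plus six Hamiltonian 6-cycles, consistent with the balanced bipartite graph being the cycle-richest triangle-free candidate.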
Understanding of English Contracts through Relation Maxims
XU Chi-ying; JIANG Li-hui
2013-01-01
A contract is the legal evidence of agreement between the concerned parties of a business, and this leads to its unique characteristics: technical terms, archaisms, borrowed words, juxtaposition, and abbreviation. The understanding of contracts is of vital importance for each party, because it concerns the share of interests. In order to avoid the ambiguity that some words or sentences in English contracts may lead to, and to achieve "best relevance and least effort" in communication, this paper applies relation maxims to analyze how to understand English contracts through the selection of words, modification, and the complexity and simplicity of sentences.
Maximizing results in reconstruction of cheek defects.
Mureau, Marc A M; Hofer, Stefan O P
2009-07-01
The face is exceedingly important, as it is the medium through which individuals interact with the rest of society. Reconstruction of cheek defects after trauma or surgery is a continuing challenge for surgeons who wish to reliably restore facial function and appearance. Important in aesthetic facial reconstruction are the aesthetic unit principles, by which the face can be divided in central facial units (nose, lips, eyelids) and peripheral facial units (cheeks, forehead, chin). This article summarizes established options for reconstruction of cheek defects and provides an overview of several modifications as well as tips and tricks to avoid complications and maximize aesthetic results.
Maximizing policy learning in international committees
Nedergaard, Peter
2007-01-01
This article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...
Independent EEG sources are dipolar.
Arnaud Delorme
Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition 'dipolarity' defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison).
Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn [RaySearch Laboratories, Sveavägen 44, Stockholm SE-111 34 (Sweden); Forsgren, Anders [Optimization and Systems Theory, Department of Mathematics, KTH Royal Institute of Technology, Stockholm SE-100 44 (Sweden)
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
Maximally Supersymmetric Planar Yang-Mills Amplitudes at Five Loops
Bern, Z; Johansson, H; Kosower, D A
2007-01-01
We present an ansatz for the planar five-loop four-point amplitude in maximally supersymmetric Yang-Mills theory in terms of loop integrals. This ansatz exploits the recently observed correspondence between integrals with simple conformal properties and those found in the four-point amplitudes of the theory through four loops. We explain how to identify all such integrals systematically. We make use of generalized unitarity in both four and D dimensions to determine the coefficients of each of these integrals in the amplitude. Maximal cuts, in which we cut all propagators of a given integral, are an especially effective means for determining these coefficients. The set of integrals and coefficients determined here will be useful for computing the five-loop cusp anomalous dimension of the theory which is of interest for non-trivial checks of the AdS/CFT duality conjecture. It will also be useful for checking a conjecture that the amplitudes have an iterative structure allowing for their all-loop resummation, w...
Maximizing Information Diffusion in the Cyber-physical Integrated Network
Hongliang Lu
2015-11-01
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated by their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks.
Maximizing Information Diffusion in the Cyber-physical Integrated Network.
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-11-11
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated by their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks.
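A baseline for the forwarding-set selection is the classic greedy connected dominating set, which, unlike the paper's DMPID, ignores link weights entirely. A minimal sketch for a connected graph:

```python
def greedy_cds(n, edges):
    """Greedy connected dominating set for a connected graph on vertices
    0..n-1: start from a maximum-degree vertex and repeatedly add the
    frontier vertex (neighbor of the current set) that newly dominates
    the most vertices, until every vertex is dominated."""
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    start = max(range(n), key=lambda v: len(nbrs[v]))
    cds = {start}
    dominated = {start} | nbrs[start]
    while len(dominated) < n:
        frontier = {v for c in cds for v in nbrs[c]} - cds
        best = max(frontier, key=lambda v: len((nbrs[v] | {v}) - dominated))
        cds.add(best)
        dominated |= nbrs[best] | {best}
    return cds

# Path 0-1-2-3-4: the interior vertices form the forwarding backbone.
path = [(i, i + 1) for i in range(4)]
backbone = greedy_cds(5, path)
print(sorted(backbone))  # -> [1, 2, 3]
```

DMPID's refinement would replace the "newly dominated count" greedy criterion with one that also scores the spread probability of the links used.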
On the Furthest Hyperplane Problem and Maximal Margin Clustering
Liberty, Edo; Weinstein, Omri
2011-01-01
This paper introduces the Furthest Hyperplane Problem (FHP). Given a set of $n$ points in $\mathbb{R}^d$, the objective is to produce the hyperplane (through the origin) which maximizes the separation margin, that is, the minimal distance between the hyperplane and an input point. We prove that FHP is NP-hard to approximate to within some small (multiplicative) constant, by presenting a gap preserving reduction from a particular version of the PCP theorem. We also present an algorithm which runs in time $O(n^{\tilde{O}(1/\theta^2)})$ where $\theta$ is the optimal margin. It is based on a dimension reduction technique combined with an $\epsilon$-net argument in the reduced dimension. As a consequence, we obtain the first polynomial time algorithm for Maximal Margin Clustering (MMC), which is the unsupervised counterpart of Support Vector Machines (SVM), for the case where the margin is a constant factor of the point cloud diameter. Indeed, this is our main motivation. Our algorithm's running time dependence on the margin ...
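In two dimensions the FHP objective is easy to state and brute-force: over unit normals w, maximize the minimal distance |<w, x>| from the hyperplane through the origin to the input points. A direction-sweep sketch of the objective only, not the paper's dimension-reduction algorithm:

```python
import math

def furthest_hyperplane_2d(points, steps=3600):
    """Brute-force 2D FHP baseline: sweep unit normals w = (cos a, sin a)
    over half the circle (w and -w define the same hyperplane) and keep
    the normal maximizing the minimal |<w, x>| over the input points."""
    best_w, best_margin = None, -1.0
    for k in range(steps):
        a = math.pi * k / steps
        w = (math.cos(a), math.sin(a))
        margin = min(abs(w[0] * x + w[1] * y) for x, y in points)
        if margin > best_margin:
            best_margin, best_w = margin, w
    return best_w, best_margin

# Two clusters straddling the y-axis: the vertical line x = 0 is furthest.
pts = [(1.0, 0.5), (1.5, -0.3), (-1.0, 0.2), (-1.2, -0.4)]
w, m = furthest_hyperplane_2d(pts)
print(round(m, 2))  # -> 1.0
```

The exponential dependence on dimension of such sweeps is exactly what motivates the paper's dimension-reduction plus epsilon-net approach.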
Maximal subbundles, quot schemes, and curve counting
Gillam, W D
2011-01-01
Let $E$ be a rank 2, degree $d$ vector bundle over a genus $g$ curve $C$. The loci of stable pairs on $E$ in class $2[C]$ fixed by the scaling action are expressed as products of Quot schemes. Using virtual localization, the stable pairs invariants of $E$ are related to the virtual intersection theory of $\mathrm{Quot}(E)$. The latter theory is extensively discussed for an $E$ of arbitrary rank; the tautological ring of $\mathrm{Quot}(E)$ is defined and is computed on the locus parameterizing rank one subsheaves. In case $E$ has rank 2, $d$ and $g$ have opposite parity, and $E$ is sufficiently generic, it is known that $E$ has exactly $2^g$ line subbundles of maximal degree. Doubling the zero section along such a subbundle gives a curve in the total space of $E$ in class $2[C]$. We relate this count of maximal subbundles with stable pairs/Donaldson-Thomas theory on the total space of $E$. This endows the residue invariants of $E$ with enumerative significance: they actually count curves in $E$.
Maximal coherence in a generic basis
Yao, Yao; Dong, G. H.; Ge, Li; Li, Mo; Sun, C. P.
2016-12-01
Since quantum coherence is an undoubted characteristic trait of quantum physics, the quantification and application of quantum coherence has been one of the long-standing central topics in quantum information science. Within the framework of a resource theory of quantum coherence proposed recently, a fiducial basis should be preselected for characterizing the quantum coherence in specific circumstances, namely, the quantum coherence is a basis-dependent quantity. Therefore, a natural question is raised: what are the maximum and minimum coherences contained in a certain quantum state with respect to a generic basis? While the minimum case is trivial, it is not so intuitive to verify in which basis the quantum coherence is maximal. Based on the coherence measure of relative entropy, we indicate the particular basis in which the quantum coherence is maximal for a given state, where the Fourier matrix (or more generally, complex Hadamard matrices) plays a critical role in determining the basis. Intriguingly, though we can prove that the basis associated with the Fourier matrix is a stationary point for optimizing the l1 norm of coherence, numerical simulation shows that it is not a global optimal choice.
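For pure states the relative entropy of coherence reduces to the Shannon entropy of the outcome probabilities in the chosen basis, which makes the basis dependence described above easy to see: |0> has zero coherence in the computational basis and maximal (1 bit) coherence in the 2x2 Fourier (Hadamard) basis. A sketch:

```python
import math

def probs_in_basis(state, basis):
    """Outcome probabilities |<b_i|psi>|^2 for each basis vector b_i
    (rows of `basis`, assumed orthonormal)."""
    return [abs(sum(b.conjugate() * s for b, s in zip(row, state))) ** 2
            for row in basis]

def rel_entropy_coherence(probs):
    """For a pure state, the relative entropy of coherence in a basis is
    the Shannon entropy of the outcome probabilities in that basis,
    since the von Neumann entropy of the state itself is zero."""
    return sum(-p * math.log(p, 2) for p in probs if p > 1e-12)

state = [1, 0]                      # qubit |0>
comp = [[1, 0], [0, 1]]             # computational basis: incoherent
r = 1 / math.sqrt(2)
fourier = [[r, r], [r, -r]]         # 2x2 Fourier (Hadamard) basis: maximal
print(round(rel_entropy_coherence(probs_in_basis(state, comp)), 6))     # -> 0.0
print(round(rel_entropy_coherence(probs_in_basis(state, fourier)), 6))  # -> 1.0
```

This is the qubit instance of the paper's observation that Fourier-type (complex Hadamard) bases are the ones achieving maximal coherence.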
Symmetry and approximability of submodular maximization problems
Vondrak, Jan
2011-01-01
A number of recent results on optimization problems involving submodular functions have made use of the multilinear relaxation of the problem. These results hold typically in the value oracle model, where the objective function is accessible via a black box returning f(S) for a given S. We present a general approach to deriving inapproximability results in the value oracle model, based on the notion of symmetry gap. Our main result is that for any fixed instance that exhibits a certain symmetry gap in its multilinear relaxation, there is a naturally related class of instances for which a better approximation factor than the symmetry gap would require exponentially many oracle queries. This unifies several known hardness results for submodular maximization, and implies several new ones. In particular, we prove that there is no constant-factor approximation for the problem of maximizing a non-negative submodular function over the bases of a matroid. We also provide a closely matching approximation algorithm for...
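For contrast with these hardness results, the classic positive result in submodular maximization is that greedy selection achieves a (1 - 1/e) approximation for monotone submodular objectives under a cardinality constraint (not the matroid-base setting the paper proves hard). A sketch with a coverage objective:

```python
def greedy_max_coverage(sets, k):
    """Greedy maximization of the coverage function f(S) = |union of S|
    under a cardinality constraint |S| <= k. Coverage is monotone
    submodular, so greedy is a (1 - 1/e)-approximation."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(sets)),
                   key=lambda i: len(sets[i] - covered) if i not in chosen else -1)
        if len(sets[best] - covered) == 0:
            break  # no remaining set adds new elements
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

family = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(family, 2)
print(chosen, sorted(covered))  # -> [2, 0] [1, 2, 3, 4, 5, 6, 7]
```

The symmetry-gap technique in the paper explains why no comparable constant-factor guarantee can exist once the constraint becomes the bases of a matroid and the function is only accessible through a value oracle.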
Vasile DEDU
2012-08-01
In this paper we present the key aspects regarding central bank independence. Most economists consider that the factor which positively influences the efficiency of monetary policy measures is a high degree of independence of the central bank. We determined that the National Bank of Romania (NBR) has a high degree of independence. NBR has both goal and instrument independence. We also consider that the increase in NBR's independence played an important role in the significant disinflation process, as headline inflation recently dropped inside the targeted band of 3% ± 1 percentage point.
Does central bank independence still matter?
de Haan, Jakob; Masciandaro, Donato; Quintyn, Marc
2008-01-01
This paper sets out background on the literature on central bank independence (CBI) and summarizes the contributions of the papers in this special issue. The clear impression is that the answer to the question "Does central bank independence still matter?" is affirmative.
Image Segmentation by Discounted Cumulative Ranking on Maximal Cliques
Carreira, Joao; Sminchisescu, Cristian
2010-01-01
We propose a mid-level image segmentation framework that combines multiple figure-ground hypotheses (FG), constrained at different locations and scales, into interpretations that tile the entire image. The problem is cast as optimization over sets of maximal cliques sampled from the graph connecting non-overlapping, putative figure-ground segment hypotheses. Potential functions over cliques combine unary Gestalt-based figure quality scores and pairwise compatibilities among spatially neighboring segments, constrained by T-junctions and the boundary interface statistics resulting from projections of real 3d scenes. Learning the model parameters is formulated as rank optimization, alternating between sampling image tilings and optimizing their potential function parameters. State of the art results are reported on both the Berkeley and the VOC2009 segmentation dataset, where a 28% improvement was achieved.
Beyond "utilitarianism": maximizing the clinical impact of moral judgment research.
Rosas, Alejandro; Koenigs, Michael
2014-01-01
The use of hypothetical moral dilemmas--which pit utilitarian considerations of welfare maximization against emotionally aversive "personal" harms--has become a widespread approach for studying the neuropsychological correlates of moral judgment in healthy subjects, as well as in clinical populations with social, cognitive, and affective deficits. In this article, we propose that a refinement of the standard stimulus set could provide an opportunity to more precisely identify the psychological factors underlying performance on this task, and thereby enhance the utility of this paradigm for clinical research. To test this proposal, we performed a re-analysis of previously published moral judgment data from two clinical populations: neurological patients with prefrontal brain damage and psychopathic criminals. The results provide intriguing preliminary support for further development of this assessment paradigm.
Tetrahedral meshing via maximal Poisson-disk sampling
Guo, Jianwei
2016-02-15
In this paper, we propose a simple yet effective method to generate 3D-conforming tetrahedral meshes from closed 2-manifold surfaces. Our approach is inspired by recent work on maximal Poisson-disk sampling (MPS), which can generate well-distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform or adaptive sampling, respectively. We also propose an efficient optimization strategy to protect the domain boundaries and to remove slivers to improve the meshing quality. We present various experimental results to illustrate the efficiency and the robustness of our proposed approach. We demonstrate that the performance and quality (e.g., minimal dihedral angle) of our approach are superior to current state-of-the-art optimization-based approaches.
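The core acceptance rule of maximal Poisson-disk sampling is simple: a candidate is kept iff it lies at least r from every accepted sample. A naive 2D dart-throwing sketch; production MPS codes use spatial grids and track uncovered area to reach true maximality efficiently:

```python
import random

def mps_2d(r, attempts=20000, seed=1):
    """Dart-throwing Poisson-disk sampling on the unit square: accept a
    candidate iff it is at least r from every accepted sample. A finite
    number of darts only approximates maximality, and the O(n) check
    per dart is the part real MPS implementations replace with grids."""
    rng = random.Random(seed)
    samples = []
    for _ in range(attempts):
        p = (rng.random(), rng.random())
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r
               for q in samples):
            samples.append(p)
    return samples

pts = mps_2d(0.1)
dmin = min(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
           for i, a in enumerate(pts) for b in pts[:i])
print(len(pts) > 0, dmin >= 0.1)  # -> True True
```

In the paper's pipeline, such well-spaced samples on the boundary and interior are exactly what makes the subsequent Delaunay/regular triangulation produce well-shaped tetrahedra.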
Accurate and efficient maximal ball algorithm for pore network extraction
Arand, Frederick; Hesser, Jürgen
2017-04-01
The maximal ball (MB) algorithm is a well established method for the morphological analysis of porous media. It extracts a network of pores and throats from volumetric data. This paper describes structural modifications to the algorithm, while the basic concepts are preserved. Substantial improvements to accuracy and efficiency are achieved as follows: First, all calculations are performed on a subvoxel accurate distance field, and no approximations to discretize balls are made. Second, data structures are simplified to keep memory usage low and improve algorithmic speed. Third, small and reasonable adjustments increase speed significantly. In volumes with high porosity, memory usage is improved compared to classic MB algorithms. Furthermore, processing is accelerated more than three times. Finally, the modified MB algorithm is verified by extracting several network properties from reference as well as real data sets. Runtimes are measured and compared to literature.
A new approximate proximal point algorithm for maximal monotone operator
HE Bingsheng (何炳生); LIAO Lizhi (廖立志); YANG Zhenhua (杨振华)
2003-01-01
The problem concerned in this paper is the set-valued equation 0 ∈ T(z), where T is a maximal monotone operator. For given x^k and β_k > 0, some existing approximate proximal point algorithms take x^{k+1} = x̃^k such that x̃^k + e^k ∈ x^k + β_k T(x̃^k) and ||e^k|| ≤ η_k ||x^k − x̃^k||, where {η_k} is a non-negative summable sequence. Instead of x^{k+1} = x̃^k, the new iterate of the proposed method is given by x^{k+1} = P_Ω[x̃^k − e^k], where Ω is the domain of T and P_Ω(·) denotes the projection onto Ω. The convergence is proved under the significantly relaxed restriction sup_{k>0} η_k < 1.
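To make the iteration concrete, here is a minimal numerical sketch of the projected step x^{k+1} = P_Ω[x̃^k − e^k] for the simplest maximal monotone operator T(z) = z on Ω = [0, ∞); the function names, the zero inexactness e^k = 0, and the step count are our own illustrative choices, not the paper's.

```python
# Illustrative sketch of a projected proximal-point iteration, assuming
# T(z) = z (so 0 ∈ T(z) means z = 0) and Omega = [0, +inf).

def project(x):
    """Projection P_Omega onto Omega = [0, +inf)."""
    return max(x, 0.0)

def prox_step(x, beta):
    """Exact resolvent for T(z) = z: solve x~ + beta * x~ = x."""
    return x / (1.0 + beta)

def solve(x0, beta=1.0, iters=50):
    x = x0
    for _ in range(iters):
        x_tilde = prox_step(x, beta)  # inexactness e^k taken as 0 here
        e = 0.0
        x = project(x_tilde - e)      # new iterate x^{k+1} = P_Omega[x~^k - e^k]
    return x

print(solve(5.0))  # approaches the zero of T
```

With e^k = 0 each step halves the iterate, so the sequence converges geometrically to the solution z = 0.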
Independent and dominating sets in wireless communication graphs
Nieberg, Tim
2006-01-01
Wireless ad hoc networks are advancing rapidly, both in research and more and more into our everyday lives. Wireless sensor networks are a prime example of a new technology that has gained a lot of attention in the literature, and that is going to enhance the way we view and interact with the enviro
Necessary and Sufficient Condition for Quantum State-Independent Contextuality.
Cabello, Adán; Kleinmann, Matthias; Budroni, Costantino
2015-06-26
We solve the problem of whether a set of quantum tests reveals state-independent contextuality and use this result to identify the simplest set of the minimal dimension. We also show that identifying state-independent contextuality graphs [R. Ramanathan and P. Horodecki, Phys. Rev. Lett. 112, 040404 (2014)] is not sufficient for revealing state-independent contextuality.
CLIMP: Clustering Motifs via Maximal Cliques with Parallel Computing Design.
Zhang, Shaoqiang; Chen, Yong
2016-01-01
A set of conserved binding sites recognized by a transcription factor is called a motif, which can be found by many applications of comparative genomics for identifying over-represented segments. Moreover, when numerous putative motifs are predicted from a collection of genome-wide data, their similarity data can be represented as a large graph, where these motifs are connected to one another. An efficient clustering algorithm is therefore desired for grouping the motifs that belong to the same groups, separating the motifs that belong to different groups, and even deleting a number of spurious ones. In this work, a new motif clustering algorithm, CLIMP, is proposed that uses maximal cliques and is sped up by parallelizing its program. When a synthetic motif dataset from the database JASPAR, a set of putative motifs from a phylogenetic foot-printing dataset, and a set of putative motifs from a ChIP dataset are used to compare the performances of CLIMP and two other high-performance algorithms, the results demonstrate that CLIMP mostly outperforms the two algorithms on the three datasets for motif clustering, so it can be a useful complement to the clustering procedures in some genome-wide motif prediction pipelines. CLIMP is available at http://sqzhang.cn/climp.html.
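Since CLIMP clusters motifs via maximal cliques of a similarity graph, a small illustration of maximal-clique enumeration may help; this is plain Bron-Kerbosch on a toy graph of our own invention, which may differ from CLIMP's actual parallelized procedure.

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of all maximal cliques.
    adj: {node: set(neighbours)} for an undirected similarity graph."""
    cliques = []

    def expand(R, P, X):
        if not P and not X:
            cliques.append(R)  # R cannot be extended: it is maximal
            return
        for v in list(P):
            expand(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}

    expand(set(), set(adj), set())
    return cliques

# usage: two overlapping motif groups {a, b, c} and {c, d}
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(sorted(sorted(c) for c in maximal_cliques(adj)))
# [['a', 'b', 'c'], ['c', 'd']]
```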
With age a lower individual breathing reserve is associated with a higher maximal heart rate.
Burtscher, Martin; Gatterer, Hannes; Faulhaber, Martin; Burtscher, Johannes
2017-09-14
Maximal heart rate (HRmax) declines linearly with increasing age. Regular exercise training is thought to partly prevent this decline, whereas sex and habitual physical activity do not. High exercise capacity is associated with a high cardiac output (HR x stroke volume) and high ventilatory requirements. Due to the close cardiorespiratory coupling, we hypothesized that the individual ventilatory response to maximal exercise might be associated with the age-related HRmax. Retrospective analyses were conducted on the results of 129 consecutively performed routine cardiopulmonary exercise tests. The study sample comprised healthy subjects of both sexes across a broad range of age (20-86 years). Maximal values of power output, minute ventilation, oxygen uptake and heart rate were assessed by incremental cycle spiroergometry. Linear multivariate regression analysis revealed that, in addition to age, the individual breathing reserve at maximal exercise was independently predictive of HRmax. A lower breathing reserve, due to a high ventilatory demand and/or a low ventilatory capacity and more pronounced at a higher age, was associated with higher HRmax. Age explained 72% of the observed variance in HRmax; this improved to 83% when the variable "breathing reserve" was entered. The presented findings indicate an independent association between the breathing reserve at maximal exercise and maximal heart rate, i.e. a low individual breathing reserve is associated with a higher age-related HRmax. A deeper understanding of this association will require investigation in a more physiological setting.
Estimating Rigid Transformation Between Two Range Maps Using Expectation Maximization Algorithm
Zeng, Shuqing
2012-01-01
We address the problem of estimating a rigid transformation between two point sets, which is a key module for target tracking systems using Light Detection And Ranging (LiDAR). A fast implementation of the Expectation-Maximization (EM) algorithm is presented whose complexity is O(N), with N the number of scan points.
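In EM schemes of this kind the M-step, given correspondences, typically reduces to a closed-form rigid fit. As an illustrative sketch under the simplifying assumption of known hard correspondences (not the paper's actual O(N) implementation), the SVD-based least-squares fit looks like:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t with Q ~ P @ R.T + t,
    via the SVD (Kabsch) solution; correspondences assumed known."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# usage: recover a known 2D rotation and translation
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = P @ R_true.T + np.array([1.0, -2.0])
R, t = rigid_fit(P, Q)
print(np.allclose(R, R_true))  # True
```

An EM registration would wrap this fit in a loop, re-estimating soft correspondences (E-step) and the weighted rigid transform (M-step) until convergence.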
The Effects of Rear-Wheel Camber on Maximal Effort Mobility Performance in Wheelchair Athletes
Mason, B.; van der Woude, L.; Tolfrey, K.; Goosey-Tolfrey, V.
2012-01-01
This study examined the effect of rear-wheel camber on maximal effort wheelchair mobility performance. 14 highly trained wheelchair court sport athletes performed a battery of field tests in 4 standardised camber settings (15°, 18°, 20°, 24°) with performance analysed using a velocometer. 20 m sprin
Maximizing scientific knowledge from randomized clinical trials
Gustafsson, Finn; Atar, Dan; Pitt, Bertram;
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly variable. Generation of trial databases and/or biobanks originating in large randomized clinical trials has successfully increased the knowledge obtained from those trials. At the 10th Cardiovascular Trialist Workshop, possibilities and pitfalls in designing and accessing clinical trial databases were discussed, in particular with respect to collaboration with the trial sponsor and to analytic pitfalls. The advantages of creating screening databases in conjunction with a given clinical trial are described; and finally, the potential for posttrial database studies to become a platform for training young scientists...
Characterizing maximally singular phase-space distributions
Sperling, J.
2016-07-01
Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiation fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states, because of its singular behavior. In this work, we perform a characterization of the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this view, we reconsider the methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. This treatment is supported with comprehensive examples and counterexamples.
Maximization of eigenvalues using topology optimization
Pedersen, Niels Leergaard
2000-01-01
Topology optimization is used to optimize the eigenvalues of plates. The results are intended especially for MicroElectroMechanical Systems (MEMS) but can be seen as more general. The problem is not formulated as a case of reinforcement of an existing structure, so there is a problem related to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency. One example is a practical MEMS application: a probe used in an Atomic Force Microscope (AFM). For the AFM probe the optimization is complicated by a constraint on the stiffness and constraints on higher order eigenvalues.
MAXIMIZING THE BENEFITS OF ERP SYSTEMS
Paulo André da Conceição Menezes
2010-04-01
The ERP (Enterprise Resource Planning) systems have been consolidated in companies of different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied in different phases, such as the strategic priorities and strategic planning defined as ERP Strategy; the business process review and ERP selection in the pre-implementation phase; the project management and ERP adaptation in the implementation phase; and the ERP revision and integration efforts in the post-implementation phase. Through rigorous use of case study methodology, this research led to developing and testing a framework for maximizing the benefits of ERP systems, and seeks to contribute to the generation of ERP initiatives to optimize their performance.
Reflection Quasilattices and the Maximal Quasilattice
Boyle, Latham
2016-01-01
We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e. Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we prove that reflection quasilattices only exist in dimensions two, three and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. We further show that, unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. W...
Evolution of correlated multiplexity through stability maximization
Dwivedi, Sanjiv K
2016-01-01
Investigating the relation between various structural patterns found in real-world networks and the stability of the underlying systems is crucial to understanding the importance and evolutionary origin of such patterns. We evolve multiplex networks, comprising anti-symmetric couplings in one layer, depicting a predator-prey relation, and symmetric couplings in the other, depicting a mutualistic (or competitive) relation, based on stability maximization through the largest eigenvalue. We find that correlated multiplexity emerges as evolution progresses. The evolved values of the correlated multiplexity exhibit a dependence on the inter-layer coupling strength. Furthermore, the inter-layer coupling strength governs the evolution of the disassortativity property in the individual layers. We provide an analytical understanding of these findings by considering star-like networks in both the layers. The model and tools used here are useful for understanding the principles governing the stability as well as the importance of such patterns in ...
Greedy Maximal Scheduling in Wireless Networks
Li, Qiao
2010-01-01
In this paper we consider greedy scheduling algorithms in wireless networks, i.e., the schedules are computed by adding links greedily based on some priority vector. Two special cases are considered: 1) Longest Queue First (LQF) scheduling, where the priorities are computed using queue lengths, and 2) Static Priority (SP) scheduling, where the priorities are pre-assigned. We first propose a closed-form lower bound stability region for LQF scheduling, and discuss the tightness result in some scenarios. We then propose a lower bound stability region for SP scheduling with multiple priority vectors, as well as a heuristic priority assignment algorithm, which is related to the well-known Expectation-Maximization (EM) algorithm. The performance gain of the proposed heuristic algorithm is finally confirmed by simulations.
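As a toy illustration of the greedy LQF rule described above (the conflict-graph model and names here are our own, not the paper's exact setting), links are considered in decreasing queue length and added whenever they conflict with no already-scheduled link:

```python
def lqf_schedule(queues, conflicts):
    """Greedy Longest-Queue-First schedule.
    queues: {link: queue_length}; conflicts: set of frozenset({a, b})
    pairs of links that interfere and cannot be scheduled together."""
    schedule = []
    # consider links in decreasing queue length (ties broken by name)
    for link in sorted(queues, key=lambda l: (-queues[l], l)):
        if all(frozenset({link, s}) not in conflicts for s in schedule):
            schedule.append(link)
    return schedule

# usage: a 3-link path where the middle link interferes with both ends
queues = {"a": 5, "b": 7, "c": 4}
conflicts = {frozenset({"a", "b"}), frozenset({"b", "c"})}
print(lqf_schedule(queues, conflicts))  # ['b'] -- b has the longest queue
```

SP scheduling follows the same loop with a fixed priority order replacing the queue-length sort key.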
Dispatch Scheduling to Maximize Exoplanet Detection
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
A New Biflavone from Selaginella pulvinata Maxim
XU Kang-Ping; XU Zhi; DENG Yin-Hua; LI Fu-Shuang; ZHOU Ying-Jun; HU Gao-Yun; TAN Gui-Shan
2003-01-01
Selaginella pulvinata Maxim. is distributed all over China and is used for the treatment of haemorrhage. [1] We studied the chemical constituents of S. pulvinata in order to find the active compounds. Dried stems and leaves of S. pulvinata (6.5 kg) were extracted with 70% ethanol twice. The extract was evaporated under vacuum, then suspended in water and extracted with petroleum ether and EtOAc sequentially. The EtOAc extract was chromatographed on silica gel, eluted with CHCl3-MeOH. As a result, a novel biflavone, named pulvinatabiflavone, was obtained from fractions 75-78. Its structure was determined on the basis of spectroscopic analysis as 5,5″,4‴-trihydroxy-7,7″-dimethoxy-[4′-O-6″]-biflavone (compound 1).
Iterative Schemes for Generalized Equilibrium Problem and Two Maximal Monotone Operators
Yao JC
2009-01-01
The purpose of this paper is to introduce and study two new hybrid proximal-point algorithms for finding a common element of the set of solutions to a generalized equilibrium problem and the sets of zeros of two maximal monotone operators in a uniformly smooth and uniformly convex Banach space. We establish strong and weak convergence theorems for these two modified hybrid proximal-point algorithms, respectively.
Planat, Michel
2012-01-01
Employing five commuting sets of five-qubit observables, we propose specific 160-661 and 160-21 state proofs of the Bell-Kochen-Specker theorem that are also proofs of Bell's theorem. A histogram of the 'Hilbert-Schmidt' distances between the corresponding maximal bases shows in both cases a noise-like behaviour. The five commuting sets are also ascribed a finite-geometrical meaning in terms of the structure of symplectic polar space W(9,2).
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin
Eliazar, Iddo
2014-12-01
The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that-on a logarithmic scale-the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle-class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset-prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
THE EFFECTS OF MAXIMAL AND SUBMAXIMAL AEROBIC EXERCISE ON THE BRONCHOSPASM INDICES IN NON-ATHLETIC STUDENTS
Amir GANJİ
2012-08-01
Background: Exercise-induced bronchospasm (EIB) is a transient airway obstruction that occurs during and after exercise. Exercise-induced bronchospasm is observed in healthy individuals as well as in asthmatic and allergic rhinitis patients. Research question: The study compared the effects of one session of submaximal aerobic exercise and a maximal one on the prevalence of exercise-induced bronchospasm in non-athletic students. Type of study: An experimental study, using human subjects, was designed. Methods: 20 non-athletic male students participated in two sessions of aerobic exercise. The prevalence of EIB was investigated among them. The criteria for assessing exercise-induced bronchospasm were a ≥10% fall in FEV1, a ≥15% fall in FEF25-75%, or a ≥25% fall in PEFR. Results: The results revealed that the maximal exercise did not affect FEF25-75% and PEF, but it led to a meaningful reduction in FEV1. In contrast, the submaximal exercise affected none of these indices; that is, in both protocols the same result was obtained for PEF and FEF25-75%. Moreover, the prevalence of EIB was 15% in the submaximal exercise and 20% in the maximal one. This difference was significant. Conclusion: This study demonstrated that, in contrast to the subjects who performed submaximal exercise, those who participated in the maximal protocol showed more changes in the pulmonary function indices, and the prevalence of EIB was greater among them.
Innovative Conference Curriculum: Maximizing Learning and Professionalism
Hyland, Nancy; Kranzow, Jeannine
2012-01-01
This action research study evaluated the potential of an innovative curriculum to move 73 graduate students toward professional development. The curriculum was grounded in the professional conference and utilized the motivation and expertise of conference presenters. This innovation required students to be more independent, act as a critical…
Maximal elements of non necessarily acyclic binary relations
Josep Enric Peris Ferrando; Begoña Subiza Martínez
1992-01-01
The existence of maximal elements for binary preference relations is analyzed without imposing transitivity or convexity conditions. From each preference relation a new acyclic relation is defined in such a way that some maximal elements of this new relation characterize maximal elements of the original one. The result covers the case whereby the relation is acyclic.
Independent candidates in Mexico
Campos, Gonzalo Santiago
2014-01-01
In this paper we discuss the issue of independent candidates in Mexico because, through the so-called political reform of 2012, the right of citizens to be registered as independent candidates was incorporated into the Political Constitution of the United Mexican States. Also, in September 2013 a reform of Article 116 of the Political Constitution of the United Mexican States was carried out in order to allow independent candidates in each state of the Republic. However, prior to the constitutio...
Adaptive Influence Maximization in Social Networks: Why Commit when You can Adapt?
Vaswani, Sharan; Lakshmanan, Laks V. S.
2016-01-01
Most previous work on influence maximization in social networks is limited to the non-adaptive setting in which the marketer is supposed to select all of the seed users, to give free samples or discounts to, up front. A disadvantage of this setting is that the marketer is forced to select all the seeds based solely on a diffusion model. If some of the selected seeds do not perform well, there is no opportunity to course-correct. A more practical setting is the adaptive setting in which the ma...
Gravity Independent Compressor Project
National Aeronautics and Space Administration — We propose to develop and demonstrate a small, gravity independent, vapor compression refrigeration system using a linear motor compressor which effectively...
Accounting for Independent Schools.
Sonenstein, Burton
The diversity of independent schools in size, function, and mode of operation has resulted in a considerable variety of accounting principles and practices. This lack of uniformity has tended to make understanding, evaluation, and comparison of independent schools' financial statements a difficult and sometimes impossible task. This manual has…
Independence of Internal Auditors.
Montondon, Lucille; Meixner, Wilda F.
1993-01-01
A survey of 288 college and university auditors investigated patterns in their appointment, reporting, and supervisory practices as indicators of independence and objectivity. Results indicate a weakness in the positioning of internal auditing within institutions, possibly compromising auditor independence. Because the auditing function is…
Fostering Musical Independence
Shieh, Eric; Allsup, Randall Everett
2016-01-01
Musical independence has always been an essential aim of musical instruction. But this objective can refer to everything from high levels of musical expertise to more student choice in the classroom. While most conceptualizations of musical independence emphasize the demonstration of knowledge and skills within particular music traditions, this…
Anderson, Edward
2013-01-01
This paper concerns what Background Independence itself is (as opposed to some particular physical theory that is background independent). The notions presented mostly arose from a layer-by-layer analysis of the facets of the Problem of Time in Quantum Gravity. Part of this coincides with two relational postulates which are thus identified as classical precursors of two of the facets of the Problem of Time. These are furthermore tied to the forms of each of the GR Hamiltonian and momentum constraints. Other aspects of Background Independence include the algebraic closure of these constraints, expressing physics in terms of beables, foliation independence as implemented by refoliation invariance, and the reconstruction of spacetime from space. The final picture is that Background Independence - a philosophically desirable and physically implementable feature for a theory to have - has the facets of the Problem of Time among its consequences. Thus these arise naturally and are problems to be resolved, as opposed to ...
Maximization Paradox: Result of Believing in an Objective Best.
Luan, Mo; Li, Hong
2017-05-01
The results from four studies provide reliable evidence of how beliefs in an objective best influence the decision process and subjective feelings. A belief in an objective best serves as the fundamental mechanism connecting the concept of maximizing and the maximization paradox (i.e., expending great effort but feeling bad when making decisions; Study 1), and randomly chosen decision makers operate similarly to maximizers once they are manipulated to believe that the best is objective (Studies 2A, 2B, and 3). In addition, the effect of a belief in an objective best on the maximization paradox is moderated by the presence of a dominant option (Study 3). The findings of this research contribute to the maximization literature by demonstrating that believing in an objective best leads to the maximization paradox. The maximization paradox is indeed the result of believing in an objective best.
Large margin image set representation and classification
Wang, Jim Jing-Yan
2014-07-06
In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
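The margin definition above can be sketched directly; in this hypothetical snippet a minimum pairwise sample distance stands in for the paper's affine-hull model, and all names are our own.

```python
import numpy as np

def set_distance(A, B):
    """Min pairwise Euclidean distance between two sample sets (rows)."""
    diffs = A[:, None, :] - B[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1)).min()

def margin(sets, labels, i):
    """Margin of set i: nearest other-class distance minus
    nearest same-class distance (larger is better)."""
    d_same = min(set_distance(sets[i], sets[j])
                 for j in range(len(sets)) if j != i and labels[j] == labels[i])
    d_diff = min(set_distance(sets[i], sets[j])
                 for j in range(len(sets)) if labels[j] != labels[i])
    return d_diff - d_same

# usage: the same-class neighbour is far closer than the other class
sets = [np.array([[0.0, 0.0]]), np.array([[0.1, 0.0]]), np.array([[5.0, 0.0]])]
labels = [0, 0, 1]
print(margin(sets, labels, 0) > 0)  # True
```

Classification by largest margin then assigns a test set to the class whose nearest set leaves the biggest such gap.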
Expectation-Maximization Binary Clustering for Behavioural Annotation.
Garriga, Joan; Palmer, John R B; Oltra, Aitana; Bartumeus, Frederic
2016-01-01
The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis.
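EMbC builds on Expectation-Maximization Clustering of a Gaussian mixture; a minimal one-dimensional, two-component EM iteration (generic EMC only, not the binary-delimiter variant or the R-package API) can be sketched as:

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """Two-component 1-D Gaussian mixture fitted by EM."""
    mu = np.array([x.min(), x.max()])   # crude initialisation
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under current parameters
        dens = pi / np.sqrt(2 * np.pi * var) * \
               np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
        pi = n / len(x)
    return mu, var, pi

# usage: two well-separated behavioural "modes"
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(8, 1, 500)])
mu, var, pi = em_gmm_1d(x)
print(np.sort(mu))
```

On this synthetic sample the fitted means land close to the true modes at 0 and 8; annotating each point with its highest-responsibility component is the clustering step.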
EXPLANATORY VARIANCE IN MAXIMAL OXYGEN UPTAKE
Jacalyn J. Robert McComb
2006-06-01
The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females, ages 18-24 years) underwent the following testing procedures: (a) a 7-site skinfold assessment; (b) a land VO2max running treadmill test; and (c) a 6-min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants' heads, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (%BF), height, weight, gender, and heart rate following a 6-min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg-1·min-1) = 56.14 - 0.92 (%BF).
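The reported regression equation is simple enough to apply directly; a sketch (the function name is mine):

```python
def predict_vo2max(percent_body_fat):
    """Estimate VO2max (ml·kg-1·min-1) from the study's regression equation."""
    return 56.14 - 0.92 * percent_body_fat

# e.g. a hypothetical participant with 15% body fat
print(round(predict_vo2max(15.0), 2))  # 42.34
```

Note the reported SEE of 3.27, so individual estimates carry several ml·kg-1·min-1 of uncertainty.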
Network channel allocation and revenue maximization
Hamalainen, Timo; Joutsensalo, Jyrki
2002-09-01
This paper introduces a model that can be used to share link capacity among customers under different kinds of traffic conditions. The model is suitable for different kinds of networks, such as 4G networks (fast wireless access to a wired network), to support connections of given duration that require a certain quality of service. We study different types of network traffic mixed on the same communication link. A single link is considered as a bottleneck, and the goal is to find customer traffic profiles that maximize the revenue of the link. The presented allocation system accepts every call and there is no absolute blocking; instead, the offered data rate per user depends on the network load. The data arrival rate depends on the current link utilization, the user's payment (selected CoS class) and the delay. The arrival rate is (i) increasing with respect to the offered data rate, (ii) decreasing with respect to the price, (iii) decreasing with respect to the network load, and (iv) decreasing with respect to the delay. As an example, an explicit formula obeying these conditions is given and analyzed.
Evolution of correlated multiplexity through stability maximization
Dwivedi, Sanjiv K.; Jalan, Sarika
2017-02-01
Investigating the relation between various structural patterns found in real-world networks and the stability of underlying systems is crucial to understand the importance and evolutionary origin of such patterns. We evolve multiplex networks, comprising antisymmetric couplings in one layer depicting predator-prey relationship and symmetric couplings in the other depicting mutualistic (or competitive) relationship, based on stability maximization through the largest eigenvalue of the corresponding adjacency matrices. We find that there is an emergence of the correlated multiplexity between the mirror nodes as the evolution progresses. Importantly, evolved values of the correlated multiplexity exhibit a dependence on the interlayer coupling strength. Additionally, the interlayer coupling strength governs the evolution of the disassortativity property in the individual layers. We provide analytical understanding to these findings by considering starlike networks representing both the layers. The framework discussed here is useful for understanding principles governing the stability as well as the importance of various patterns in the underlying networks of real-world systems ranging from the brain to ecology which consist of multiple types of interaction behavior.
Maximal respiratory pressure in healthy Japanese children
Tagami, Miki; Okuno, Yukako; Matsuda, Tadamitsu; Kawamura, Kenta; Shoji, Ryosuke; Tomita, Kazuhide
2017-01-01
[Purpose] Normal values for respiratory muscle pressures during development in Japanese children have not been reported. The purpose of this study was to investigate respiratory muscle pressures in Japanese children aged 3–12 years. [Subjects and Methods] We measured respiratory muscle pressure values using a manovacuometer without a nose clip, with subjects in a sitting position. Data were collected for ages 3–6 (Group I: 68 subjects), 7–9 (Group II: 86 subjects), and 10–12 (Group III: 64 subjects) years. [Results] Respiratory muscle pressures in children increased significantly with age in both sexes, and were higher in boys than in girls. For each sex, the correlation coefficients between maximal respiratory pressure and age, height, and weight were significant, at values of 0.279 to 0.471. [Conclusion] In this study, we present pediatric respiratory muscle pressure reference values for each age. The values for respiratory muscle pressures were lower than those reported in Brazilian studies, suggesting that respiratory muscle pressures vary with ethnicity. PMID:28356644
Maximizing exosome colloidal stability following electroporation.
Hood, Joshua L; Scott, Michael J; Wickline, Samuel A
2014-03-01
Development of exosome-based semisynthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, investigations into exosome colloidal stability following electroporation have not been considered. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post electroporation for both homogenous B16 melanoma and heterogeneous human serum-derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label-free means of enriching exogenously modified exosomes and introduces the potential for MRI-driven theranostic exosome investigations in vivo.
Constructive Sets in Computable Sets
傅育熙
1997-01-01
The original interpretation of the constructive set theory CZF in Martin-Löf's type theory uses the ‘extensional identity types’. It is generally believed that these ‘types’ do not belong to type theory. In this paper it will be shown that the interpretation goes through without identity types. This paper will also show that the interpretation can be given in an intensional type theory. This reflects the computational nature of the interpretation. This computational aspect is reinforced by an ω-Set model of CZF.
The Maximal Runaway Temperature of Earth-like Planets
Shaviv, Nir J; Wehrse, Rainer
2012-01-01
We generalize the problem of the semi-gray model to cases in which a non-negligible fraction of the stellar radiation falls on the long-wavelength range, and/or that the planetary long-wavelength emission penetrates into the transparent short wavelength domain of the absorption. Second, applying the most general assumptions and independently of any particular properties of an absorber, we show that the greenhouse effect saturates and any Earth-like planet has a maximal temperature which depends on the type of and distance to its main-sequence star, its albedo and the primary atmospheric components which determine the cutoff frequency below which the atmosphere is optically thick. For example, a hypothetical convection-less planet similar to Venus, that is optically thin in the visible, could have at most a surface temperature of 1200-1300K irrespective of the nature of the greenhouse gas. We show that two primary mechanisms are responsible for the saturation of the runaway greenhouse effect, depending on the ...
Maximizing protection from use of oral cholera vaccines in developing country settings
Desai, Sachin N; Cravioto, Alejandro; Sur, Dipika; Kanungo, Suman
2014-01-01
When oral vaccines are administered to children in lower- and middle-income countries, they do not induce the same immune responses as they do in developed countries. Although not completely understood, reasons for this finding include maternal antibody interference, mucosal pathology secondary to infection, malnutrition, enteropathy, and previous exposure to the organism (or related organisms). Young children experience a high burden of cholera infection, which can lead to severe acute dehydrating diarrhea and substantial mortality and morbidity. Oral cholera vaccines show variations in their duration of protection and efficacy between children and adults. Evaluating innate and memory immune response is necessary to understand V. cholerae immunity and to improve current cholera vaccine candidates, especially in young children. Further research on the benefits of supplementary interventions and delivery schedules may also improve immunization strategies. PMID:24861554
Algorithmic and Complexity Results for Cutting Planes Derived from Maximal Lattice-Free Convex Sets
Basu, Amitabh; Köppe, Matthias
2011-01-01
We study a mixed integer linear program with m integer variables and k non-negative continuous variables in the form of the relaxation of the corner polyhedron that was introduced by Andersen, Louveaux, Weismantel and Wolsey [Inequalities from two rows of a simplex tableau, Proc. IPCO 2007, LNCS, vol. 4513, Springer, pp. 1--15]. We describe the facets of this mixed integer linear program via the extreme points of a well-defined polyhedron. We then utilize this description to give polynomial time algorithms to derive valid inequalities with optimal l_p norm for arbitrary, but fixed m. For the case of m=2, we give a refinement and a new proof of a characterization of the facets by Cornuejols and Margot [On the facets of mixed integer programs with two integer variables and two constraints, Math. Programming 120 (2009), 429--456]. The key point of our approach is that the conditions are much more explicit and can be tested in a more direct manner, removing the need for a reduction algorithm. These results allow ...
On the existence of maximizing measures for irreducible countable Markov shifts: a dynamical proof
Bissacot, Rodrigo
2011-01-01
We prove that if $\\Sigma_{\\mathbf A}(\\mathbb N)$ is an irreducible Markov shift space over $\\mathbb N$ and $f:\\Sigma_{\\mathbf A}(\\mathbb N) \\rightarrow \\mathbb R$ is coercive with bounded variation then there exists a maximizing probability measure for $f$, whose support lies on a Markov subshift over a finite alphabet. Furthermore, the support of any maximizing measure is contained in this same compact subshift. To the best of our knowledge, this is the first proof of the existence of maximizing measures beyond the finitely primitive case in the non-compact setting. It is also noteworthy that our technique works in the case of the full shift over positive real sequences.
Probabilistic conditional independence structures
Studeny, Milan
2005-01-01
Probabilistic Conditional Independence Structures provides the mathematical description of probabilistic conditional independence structures; the author uses non-graphical methods of their description, and takes an algebraic approach.The monograph presents the methods of structural imsets and supermodular functions, and deals with independence implication and equivalence of structural imsets.Motivation, mathematical foundations and areas of application are included, and a rough overview of graphical methods is also given.In particular, the author has been careful to use suitable terminology, and presents the work so that it will be understood by both statisticians, and by researchers in artificial intelligence.The necessary elementary mathematical notions are recalled in an appendix.
Scheinker, Alexander; Baily, Scott; Young, Daniel; Kolski, Jeffrey S.; Prokop, Mark
2014-08-01
In this work, an implementation of a recently developed model-independent adaptive control scheme, for tuning uncertain and time-varying systems, is demonstrated on the Los Alamos linear particle accelerator. The main benefits of the algorithm are its simplicity, its ability to handle an arbitrary number of components without increased complexity, and its extreme robustness to measurement noise, a property which is both analytically proven and demonstrated in the experiments performed. We report on the application of this algorithm for simultaneous tuning of two buncher radio frequency (RF) cavities, in order to maximize beam acceptance into the accelerating electromagnetic field cavities of the machine, with the tuning based only on a noisy measurement of the surviving beam current downstream from the two bunching cavities. The algorithm automatically responds to arbitrary phase shifts of the cavity phases, automatically re-tuning the cavity settings and maximizing beam acceptance. Because it is model-independent, it can be utilized for continuous adaptation to time-variation of a large system, such as due to thermal drift or damage to components, in which the remaining, functional components would be automatically re-tuned to compensate for the failing ones. We start by discussing the general model-independent adaptive scheme and how it may be digitally applied to a large class of multi-parameter uncertain systems, and then present our experimental results.
Heggelund, Jørn; Fimland, Marius S; Helgerud, Jan; Hoff, Jan
2013-06-01
This study compared maximal strength training (MST) with equal training volume (kg × sets × repetitions) of conventional strength training (CON) primarily with regard to work economy, and second one repetition maximum (1RM) and rate of force development (RFD) of single leg knee extension. In an intra-individual design, one leg was randomized to knee-extension MST (4 or 5RM) and the other leg to CON (3 × 10RM) three times per week for 8 weeks. MST was performed with maximal concentric mobilization of force while CON was performed with moderate velocity. Eight untrained or moderately trained men (26 ± 1 years) completed the study. The improvement in gross work economy was -0.10 ± 0.08 L min(-1) larger after MST (P = 0.011, between groups). From pre- to post-test the MST and CON improved net work economy with 31 % (P < 0.001) and 18 % (P = 0.01), respectively. Compared with CON, the improvement in 1RM and dynamic RFD was 13.7 ± 8.4 kg (P = 0.002) and 587 ± 679 N s(-1) (P = 0.044) larger after MST, whereas isometric RFD was of borderline significance 3,028 ± 3,674 N s(-1) (P = 0.053). From pre- to post-test, MST improved 1RM and isometric RFD with 50 % (P < 0.001) and 155 % (P < 0.001), respectively whereas CON improved 1RM and isometric RFD with 35 % (P < 0.001) and 83 % (P = 0.028), respectively. Anthropometric measures of quadriceps femoris muscle mass and peak oxygen uptake did not change. In conclusion, 8 weeks of MST was more effective than CON for improving work economy, 1RM and RFD in untrained and moderately trained men. The advantageous effect of MST to improve work economy could be due to larger improvements in 1RM and RFD.
Maximizing Experiential Learning for Student Success
Coker, Jeffrey Scott; Porter, Desiree Jasmine
2015-01-01
Several years ago, Elon University set out to better understand experiential learning on campus. At the time, there was a pragmatic need to collect data that would inform revisions to the core curriculum, including an experiential-learning requirement (ELR) that had been in place since 1994. The question was whether it made sense to raise the…
Matching, Demand, Maximization, and Consumer Choice
Wells, Victoria K.; Foxall, Gordon R.
2013-01-01
The use of behavioral economics and behavioral psychology in consumer choice has been limited. The current study extends the study of consumer behavior analysis, a synthesis between behavioral psychology, economics, and marketing, to a larger data set. This article presents the current work and results from the early analysis of the data. We…
The rank-size scaling law and entropy-maximizing principle
Chen, Yanguang
2012-02-01
The rank-size regularity known as Zipf's law is one of the scaling laws frequently observed in the natural living world and in social institutions. Many scientists have tried to derive the rank-size scaling relation through entropy-maximizing methods, but they have not been entirely successful. By introducing a pivotal constraint condition, I present here a set of new derivations based on the self-similar hierarchy of cities. First, I derive a pair of exponential laws by postulating local entropy maximization. From the two exponential laws follows a general hierarchical scaling law, which implies the general form of Zipf's law. Second, I derive a special hierarchical scaling law with the exponent equal to 1 by postulating global entropy maximization, and this implies the pure form of Zipf's law. The rank-size scaling law proves to be a special case of the hierarchical scaling law, and the derivation suggests a certain scaling range with the first or the last data point as an outlier. The entropy maximization of social systems differs from the notion of entropy increase in thermodynamics. For urban systems, entropy maximization suggests the greatest equilibrium between equity for parts/individuals and efficiency of the whole.
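Under a pure rank-size (Zipf) law, S(r) = S1/r, the log-log regression of size on rank has slope exactly -1; a quick illustrative check (synthetic data, not from the paper):

```python
import math

def rank_size_exponent(sizes):
    """Least-squares slope of log(size) against log(rank); about -1 under pure Zipf."""
    xs = sorted(sizes, reverse=True)
    lr = [math.log(r) for r in range(1, len(xs) + 1)]
    ls = [math.log(s) for s in xs]
    n = len(xs)
    mx, my = sum(lr) / n, sum(ls) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lr, ls))
    den = sum((a - mx) ** 2 for a in lr)
    return num / den

# hypothetical pure-Zipf "city sizes": S(r) = S1 / r
sizes = [1_000_000 / r for r in range(1, 101)]
print(round(rank_size_exponent(sizes), 3))  # -1.0
```

Real data typically obey the law only over a limited scaling range, consistent with the abstract's remark about the first or last data point behaving as an outlier.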
A New Minimal Rough Set Axiom Group
DAI Jian-hua
2004-01-01
Rough set axiomatization is one aspect of rough set study, and its purpose is to characterize rough set theory using independent and minimal axiom groups, so that rough set theory can be studied by logic and axiom-system methods. To characterize rough set theory, an axiom group named H, consisting of 4 axioms, is proposed, and its validity in characterizing rough set theory is proved. Simultaneously, the minimality of the axiom group, which requires that each axiom is an inequality and each is independent, is proved. The axiom group is helpful for researching rough set theory by logic and axiom-system methods.
Is energy expenditure taken into account in human sub-maximal jumping?--A simulation study.
Vanrenterghem, Jos; Bobbert, Maarten F; Casius, L J Richard; De Clercq, Dirk
2008-02-01
This paper presents a simulation study that was conducted to investigate whether the stereotyped motion pattern observed in human sub-maximal jumping can be interpreted from the perspective of energy expenditure. Human sub-maximal vertical countermovement jumps were compared to jumps simulated with a forward dynamic musculo-skeletal model. This model consisted of four interconnected rigid segments, actuated by six Hill-type muscle actuators. The only independent input of the model was the stimulation of muscles as a function of time. This input was optimized using an objective function, in which targeting a specific sub-maximal height value was combined with minimizing the amount of muscle work produced. The characteristic changes in motion pattern observed in humans jumping to different target heights were reproduced by the model. As the target height was lowered, two major changes occurred in the motion pattern. First, the countermovement amplitude was reduced; this helped to save energy because of reduced dissipation and regeneration of energy in the contractile elements. Second, the contribution of rotation of the heavy proximal segments of the lower limbs to the vertical velocity of the centre of gravity at take-off was less; this helped to save energy because of reduced ineffective rotational energies at take-off. The simulations also revealed that, with the observed movement adaptations, muscle work was reduced through improved relative use of the muscle's elastic properties in sub-maximal jumping. According to the results of the simulations, the stereotyped motion pattern observed in sub-maximal jumping is consistent with the idea that in sub-maximal jumping, subjects are trying to achieve the targeted jump height with minimal energy expenditure.
Is bi-maximal mixing compatible with the large angle MSW solution of the solar neutrino problem?
1998-01-01
It is shown that the large angle MSW solution of the solar neutrino problem with a bi-maximal neutrino mixing matrix implies an energy-independent suppression of the solar nu_e flux. The present solar neutrino data exclude this solution of the solar neutrino problem at 99.6% CL.
A Maximal Tractable Class of Soft Constraints
Cohen, D; Jeavons, P; Krokhin, A; 10.1613/jair.1400
2011-01-01
Many researchers in artificial intelligence are beginning to explore the use of soft constraints to express a set of (possibly conflicting) problem requirements. A soft constraint is a function defined on a collection of variables which associates some measure of desirability with each possible combination of values for those variables. However, the crucial question of the computational complexity of finding the optimal solution to a collection of soft constraints has so far received very little attention. In this paper we identify a class of soft binary constraints for which the problem of finding the optimal solution is tractable. In other words, we show that for any given set of such constraints, there exists a polynomial time algorithm to determine the assignment having the best overall combined measure of desirability. This tractable class includes many commonly-occurring soft constraints, such as 'as near as possible' or 'as soon as possible after', as well as crisp constraints such as 'greater than'. F...
An Online Algorithm for Maximizing Submodular Functions
2007-12-20
…achieving an approximation ratio better than 1 − 1/e + ε for MAX k-COVERAGE is NP-hard [Feige, J. ACM 45(4):634–652, 1998]. Recently, Feige, Lovász, and Tetali introduced MIN SUM SET COVER [Approximating min sum set cover, Algorithmica 40(4)]…
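The classic greedy algorithm achieves the matching (1 − 1/e) approximation for MAX k-COVERAGE, the problem named in the fragment above; a minimal offline sketch (the paper's online setting is not reproduced here):

```python
def greedy_max_coverage(sets, k):
    """Greedy (1 - 1/e)-approximation for MAX k-COVERAGE.

    `sets` is a list of candidate sets; pick k of them so the size
    of their union is (approximately) maximized.
    """
    chosen, covered = [], set()
    for _ in range(k):
        # pick the set with the largest marginal coverage gain
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not (sets[best] - covered):
            break  # no set adds anything new
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# toy instance (hypothetical): greedy first takes {4,5,6,7}, then {1,2,3}
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, 2)
print(len(covered))  # 7
```

Greedy works because coverage is monotone and submodular: each marginal gain bounds the remaining optimality gap.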
Catalan Number and Enumeration of Maximal Outerplanar Graphs
(no author listed)
2000-01-01
Catalan numbers are an important class of combinatorial numbers, and maximal outerplanar graphs are important in graph theory. In this paper, formulas for enumerating maximal outerplanar graphs are first derived by means of graph compression and group-theoretic methods. Then the relationships between Catalan numbers and the numbers of labeled and unlabeled maximal outerplanar graphs are presented. Computed results verify these formulas.
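Catalan numbers themselves follow the standard convolution recurrence; for context, C(n-2) counts the triangulations of a convex n-gon, the combinatorial skeleton behind maximal outerplanar graphs (a sketch of the recurrence only, not the paper's enumeration formulas):

```python
def catalan(n):
    """n-th Catalan number via C(0) = 1, C(m+1) = sum_{i=0}^{m} C(i) * C(m-i)."""
    c = [1]
    for m in range(n):
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c[n]

print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```

For example, C(3) = 5 counts the triangulations of a convex pentagon, each of which is a maximal outerplanar graph on 5 labeled vertices in a fixed cyclic order.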
Maximality-Based Structural Operational Semantics for Petri Nets
Saīdouni, Djamel Eddine; Belala, Nabil; Bouneb, Messaouda
2009-03-01
The goal of this work is to exploit an implementable model, namely the maximality-based labeled transition system, which permits to express true-concurrency in a natural way without splitting actions on their start and end events. One can do this by giving a maximality-based structural operational semantics for the model of Place/Transition Petri nets in terms of maximality-based labeled transition systems structures.
Relative advantage, queue jumping, and welfare maximizing wealth distribution
2006-01-01
Suppose individuals get utilities from the total amount of wealth they hold and from their wealth relative to those immediately below them. This paper studies the distribution of wealth that maximizes an additive welfare function made up of these utilities. It interprets wealth distribution in a control theory framework to show that the welfare maximizing distribution may have unexpected properties. In some circumstances it requires that inequality be maximized at the poorest and richest ends...
Maximizers versus satisficers: Decision-making styles, competence, and outcomes
Parker, Andrew M.; Wändi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al.\\ (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decision...
Maximally entangled states in pseudo-telepathy games
Mančinska, Laura
2015-01-01
A pseudo-telepathy game is a nonlocal game which can be won with probability one using some finite-dimensional quantum strategy but not using a classical one. Our central question is whether there exist two-party pseudo-telepathy games which cannot be won with probability one using a maximally entangled state. Towards answering this question, we develop conditions under which maximally entangled states suffice. In particular, we show that maximally entangled states suffice for weak projection...
Sums of magnetic eigenvalues are maximal on rotationally symmetric domains
Laugesen, Richard S; Roy, Arindam
2011-01-01
The sum of the first n energy levels of the planar Laplacian with constant magnetic field of given total flux is shown to be maximal among triangles for the equilateral triangle, under normalization of the ratio (moment of inertia)/(area)^3 on the domain. The result holds for both Dirichlet and Neumann boundary conditions, with an analogue for Robin (or de Gennes) boundary conditions too. The square similarly maximizes the eigenvalue sum among parallelograms, and the disk maximizes among ellipses. More generally, a domain with rotational symmetry will maximize the magnetic eigenvalue sum among all linear images of that domain. These results are new even for the ground state energy (n=1).
Sums of Laplace eigenvalues - rotationally symmetric maximizers in the plane
Laugesen, R S
2010-01-01
The sum of the first $n \\geq 1$ eigenvalues of the Laplacian is shown to be maximal among triangles for the equilateral triangle, maximal among parallelograms for the square, and maximal among ellipses for the disk, provided the ratio $\\text{(area)}^3/\\text{(moment of inertia)}$ for the domain is fixed. This result holds for both Dirichlet and Neumann eigenvalues, and similar conclusions are derived for Robin boundary conditions and Schr\\"odinger eigenvalues of potentials that grow at infinity. A key ingredient in the method is the tight frame property of the roots of unity. For general convex plane domains, the disk is conjectured to maximize sums of Neumann eigenvalues.
Independent technical review, handbook
1994-02-01
Purpose: Provide an independent engineering review of the major projects being funded by the Department of Energy, Office of Environmental Restoration and Waste Management. The independent engineering review will address questions of whether the engineering practice is sufficiently developed to a point where a major project can be executed without significant technical problems. The independent review will focus on questions related to: (1) Adequacy of development of the technical base of understanding; (2) Status of development and availability of technology among the various alternatives; (3) Status and availability of the industrial infrastructure to support project design, equipment fabrication, facility construction, and process and program/project operation; (4) Adequacy of the design effort to provide a sound foundation to support execution of project; (5) Ability of the organization to fully integrate the system, and direct, manage, and control the execution of a complex major project.
‘Distro’: Independent Creativity for Independent Industry
Wiwik Sri Wulandari
2014-11-01
To shorten this introduction, ‘Distro’ is a cultural phenomenon among the young generation nowadays. The word ‘Distro’ is short for Distribution Outlet. The ‘Distro’ phenomenon is a new trend of producing and distributing creative design products amongst youngsters independently, in an independent industry open to challenge and competition for everyone. This field research was conducted in the city of Yogyakarta, renowned as the second city for creative design products after Bandung. Yogyakarta is well known as the students’ city as well as the cultural capital of Indonesia. As a students’ city, it is natural that Yogyakarta keeps growing in numbers of young people who come to study here, enriching the culture of the city and making it more multicultural and plural. This sociocultural phenomenon has brought not only dynamic change to the society, economy and cultural life of the city, but also social problems that need to be overcome. My first research question, then, is how the existence of ‘Distro’ in Yogyakarta can be a positive answer to social problems that may arise from the hegemony of global markets domestically. My second question is how the creative product designs are made and distributed creatively in an independent industry. Lastly, my third question deals with the genres of the design products and how they can become a new trend in art expression. ‘Distro’ is a product of culture, and it is also creating cultural change in some aspects of the lives of the youngsters who are ‘Distro’ enthusiasts. The ‘Distro’ phenomenon is basically a counter to the hegemony of internationally branded product designs, which have become ever more dominant in domestic markets and industry; thus, ‘Distro’ has a spirit of survival whilst at the same time producing opportunities for entrepreneurship.
Enumerating Maximal Cliques in Temporal Graphs
2016-01-01
Dynamics of interactions play an increasingly important role in the analysis of complex networks. A modeling framework to capture this are temporal graphs. We focus on enumerating delta-cliques, an extension of the concept of cliques to temporal graphs: for a given time period delta, a delta-clique in a temporal graph is a set of vertices and a time interval such that all vertices interact with each other at least after every delta time steps within the time interval. Viard, Latapy, and Magni...
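For static graphs, maximal cliques can be enumerated with the classic Bron-Kerbosch recursion, the static notion that delta-clique enumeration in temporal graphs extends; a minimal sketch without pivoting (not the paper's temporal algorithm):

```python
def bron_kerbosch(graph, r=frozenset(), p=None, x=frozenset()):
    """Yield the maximal cliques of a static graph given as an adjacency dict.

    r = current clique, p = candidates that extend it, x = already-processed
    vertices (used to suppress non-maximal output).
    """
    if p is None:
        p = frozenset(graph)
    if not p and not x:
        yield set(r)  # nothing can extend r: it is maximal
        return
    for v in list(p):
        yield from bron_kerbosch(graph, r | {v}, p & graph[v], x & graph[v])
        p = p - {v}
        x = x | {v}

# toy graph: triangle 1-2-3 plus the pendant edge 3-4
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
cliques = sorted(sorted(c) for c in bron_kerbosch(g))
print(cliques)  # [[1, 2, 3], [3, 4]]
```

A delta-clique additionally carries a time interval and requires every pair to interact at least once per delta steps within it, which is what makes the temporal variant harder.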
Renner, R
2007-01-01
Given a quantum system consisting of many parts, we show that symmetry of the system's state, i.e., invariance under swappings of the subsystems, implies that almost all of its parts are virtually identical and independent of each other. This result generalises de Finetti's classical representation theorem for infinitely exchangeable sequences of random variables as well as its quantum-mechanical analogue. It has applications in various areas of physics as well as information theory and cryptography. For example, in experimental physics, one typically collects data by running a certain experiment many times, assuming that the individual runs are mutually independent. Our result can be used to justify this assumption.
D2-brane Chern-Simons theories: F-maximization = a-maximization
Fluder, Martin
2015-01-01
We study a system of N D2-branes probing a generic Calabi-Yau three-fold singularity in the presence of a non-zero quantized Romans mass n. We argue that the low-energy effective N = 2 Chern-Simons quiver gauge theory flows to a superconformal fixed point in the IR, and construct the dual AdS_4 solution in massive IIA supergravity. We compute the free energy F of the gauge theory on S^3 using localization. In the large N limit we find F = c(nN)^{1/3}a^{2/3}, where c is a universal constant and a is the a-function of the "parent" four-dimensional N = 1 theory on N D3-branes probing the same Calabi-Yau singularity. It follows that maximizing F over the space of admissible R-symmetries is equivalent to maximizing a for this class of theories. Moreover, we show that the gauge theory result precisely matches the holographic free energy of the supergravity solution, and provide a similar matching of the VEV of a BPS Wilson loop operator.
Universally Utility-Maximizing Privacy Mechanisms
Ghosh, Arpita; Sundararajan, Mukund
2008-01-01
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of {\em differential privacy}, which requires that a mechanism's output distribution is nearly the same (in a strong sense) whether or not a given database row is included or excluded. In this paper, we pursue much stronger and more general utility guarantees. We seek a mechanism that guarantees near-optimal utility to every potential user, independent of its side information. Formally, we model the side information of a potential user as a prior distribution over query results. An interaction between a user and a mechanism induces a posterior distribution, and we define the utility of the mechanism for this user as the accuracy of this posterior, as quantified via a user-specific loss function. A differentially private mechanism $M$ is (near-)optimal for a given user $u$ if $u$ derives (almost) as much utility from $M$ a...
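For context, the differentially private mechanism for count queries studied in this line of work is the geometric mechanism, which adds two-sided geometric noise. A minimal sketch follows (the function names and the sampling approach are my own; treat this as illustrative, not as the paper's implementation):

```python
import math
import random

def geometric_mechanism(true_count, epsilon, rng):
    """Add two-sided geometric noise: P(noise = k) is proportional to
    alpha^|k| with alpha = exp(-epsilon), the discrete analogue of Laplace noise."""
    alpha = math.exp(-epsilon)

    def geometric(p):
        # failures before the first success, success probability p
        # (inverse-CDF sampling: floor(ln(U) / ln(1 - p)))
        return int(math.log(1.0 - rng.random()) / math.log(1.0 - p))

    # the difference of two i.i.d. geometric variables is two-sided geometric
    p = 1.0 - alpha
    return true_count + geometric(p) - geometric(p)

rng = random.Random(7)
draws = [geometric_mechanism(42, 1.0, rng) for _ in range(5000)]
print(sum(draws) / len(draws))  # the noise is unbiased, so the mean is close to 42
```

Because the output stays on the integer grid, this mechanism preserves the discrete structure of count queries, which is what makes the universal-optimality analysis possible.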
The generalized scheme-independent Crewther relation in QCD
Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; Brodsky, Stanley J.
2017-07-01
The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbatively calculable QCD process. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton–nucleon scattering times the Adler function, defined from the cross section for electron–positron annihilation into hadrons, has no pQCD radiative corrections. The “Generalized Crewther Relation” relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function (D_ns) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering (C_Bjp) at leading twist. A scheme-dependent ΔCSB term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence on both the choice of the renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both D_ns and the inverse coefficient C_Bjp^(-1) have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, α̂_d(Q) = Σ_{i≥1} α̂_{g1}^i(Q_i), at their respective physical scales. This identity is independent of the choice of the renormalization scheme at any finite order, and the dependence on the choice of the initial scale is negligible. Similar
Independent School Administration.
Springer, E. Laurence
This book deals with the management of privately supported schools and offers guidelines on how these schools might be operated more effectively and economically. The discussions and conclusions are based on observations and data from case studies of independent school operations. The subjects discussed include the role and organization of…
Warming-Rasmussen, Bent; Quick, Reiner; Liempd, Dennis van
2011-01-01
The article presents research contributions to the question of whether the auditor should continue to provide both audit and non-audit services (NAS) to an audit client. Research results show that this dual function for the same audit client is a problem for stakeholders' confidence in auditor independence...
Detrimental Relations of Maximization with Academic and Career Attitudes
Dahling, Jason J.; Thompson, Mindi N.
2013-01-01
Maximization refers to a decision-making style that involves seeking the single best option when making a choice, which is generally dysfunctional because people are limited in their ability to rationally evaluate all options and identify the single best outcome. The vocational consequences of maximization are examined in two samples, college…
An Overview of Maximal Unitarity at Two Loops
Johansson, Henrik; Larsen, Kasper J.
2012-01-01
We discuss the extension of the maximal-unitarity method to two loops, focusing on the example of the planar double box. Maximal cuts are reinterpreted as contour integrals, with the choice of contour fixed by the requirement that integrals of total derivatives vanish on it. The resulting formulae, like their one-loop counterparts, can be applied either analytically or numerically.
Haemodynamics during maximal exercise after coronary bypass surgery
P.W.J.C. Serruys (Patrick); M.F. Rousseau (Francois); J. Cosyns; R. Ponlot; L.A. Brasseur; J-M.R. Detry (Jean-Marie)
1978-01-01
Fifty patients underwent an objective measurement of physical working capacity by means of a multistage test of maximally tolerated exertion before and after coronary bypass surgery; 29 patients also had haemodynamic measurements during maximal exercise before and after coronary bypass s
Utility maximization under solvency constraints and unhedgeable risks
T. Kleinow; A. Pelsser
2008-01-01
We consider the utility maximization problem for an investor who faces a solvency or risk constraint in addition to a budget constraint. The investor wishes to maximize her expected utility from terminal wealth subject to a bound on her expected solvency at maturity. We measure solvency using a solv
On a discrete version of Tanaka's theorem for maximal functions
Bober, Jonathan; Hughes, Kevin; Pierce, Lillian B
2010-01-01
In this paper we prove a discrete version of Tanaka's Theorem \\cite{Ta} for the Hardy-Littlewood maximal operator in dimension $n=1$, both in the non-centered and centered cases. For the discrete non-centered maximal operator $\\wM $ we prove that, given a function $f: \\Z \\to \\R$ of bounded variation,
A Class of Maximal Functions with Oscillating Kernels
Ahmad AL-SALMAN
2007-01-01
The author studies the Lp mapping properties of a class of maximal functions that are related to oscillatory singular integral operators. Lp estimates, as well as the corresponding weighted estimates of such maximal functions, are obtained. Moreover, several applications of our results are highlighted.
ESTIMATES FOR THE MAXIMAL MULTILINEAR SINGULAR INTEGRAL OPERATORS
Yulan Jiao
2010-01-01
In this paper, some mapping properties are considered for the maximal multilinear singular integral operator whose kernel satisfies a certain minimum regularity condition. It is proved that a certain uniform local estimate for doubly truncated operators implies the Lp(Rn) (1 < p < ∞) boundedness of the maximal operator.
Maximally Flat Waveforms Operation of Class-F Power Amplifiers
V. Krizhanovski
2001-04-01
The requirements on the output network's impedance at higher harmonic components, and the appropriate input driving, for the formation of maximally flat waveforms of drain current and voltage are presented. Using such waveforms allows obtaining the maximal efficiency and output power capability of class-F power amplifiers.
Entanglement of Superpositions of Orthogonal Maximally Entangled States
ZHANG Dao-Hua; ZHOU Duan-Lu; FAN Heng
2010-01-01
We study the entanglement properties of the superposed state of orthogonal maximally entangled states. It is shown when the superposed state is maximally entangled and when it is separable. The relation between the superposed state and the mutually unbiased state is discussed.
CHROMATIC NUMBER OF SQUARE OF MAXIMAL OUTERPLANAR GRAPHS
Luo Xiaofang
2007-01-01
Let χ(G²) denote the chromatic number of the square of a maximal outerplanar graph G, and let Q denote a particular maximal outerplanar graph obtained by adding three chords. It is shown that χ(G²) = Δ + 2 if and only if G is Q, where Δ denotes the maximum degree of G.
SAR image target segmentation based on entropy maximization and morphology
柏正尧; 刘洲峰; 何佩琨
2004-01-01
Entropy maximization thresholding is a simple, effective image segmentation method. The relation between the histogram entropy and the gray level of an image is analyzed. An approach that speeds up the computation of the optimal threshold based on entropy maximization is proposed. The suggested method has been applied to synthetic aperture radar (SAR) image target segmentation. Mathematical morphology works well in reducing the residual noise.
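The entropy-maximization thresholding criterion mentioned in this abstract can be sketched in a few lines. The following is a generic illustration of the idea (not the paper's accelerated algorithm): choose the threshold that maximizes the summed entropies of the two resulting gray-level distributions.

```python
import numpy as np

def entropy_threshold(image, levels=256):
    """Return the gray level t maximizing H(below t) + H(above t),
    the classic entropy-maximization segmentation criterion."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:          # skip degenerate splits
            continue
        # normalized class distributions (zero bins dropped before the log)
        p0 = p[:t][p[:t] > 0] / w0
        p1 = p[t:][p[t:] > 0] / w1
        h = -(p0 * np.log(p0)).sum() - (p1 * np.log(p1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# synthetic bimodal "image": dark background around 60, bright targets around 180
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                              rng.normal(180, 10, 5000)]), 0, 255)
print(entropy_threshold(img))  # lands between the two modes
```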
Effect of Age and Other Factors on Maximal Heart Rate.
Londeree, Ben R.; Moeschberger, Melvin L.
1982-01-01
To reduce confusion regarding reported effects of age on maximal exercise heart rate, a comprehensive review of the relevant English literature was conducted. Data on maximal heart rate after exercise on a bicycle, on a treadmill, and after swimming were analyzed with regard to physical fitness and to age, sex, and racial differences. (Authors/PP)
Maximal zero textures in Linear and Inverse seesaw
Roopam Sinha
2016-08-01
We investigate Linear and Inverse seesaw mechanisms with maximal zero textures of the constituent matrices, subject to the assumption of non-zero eigenvalues for the neutrino mass matrix mν and the charged lepton mass matrix me. If we restrict to the minimally parametrized non-singular ‘me’ (i.e., with the maximum number of zeros), only 6 possible textures of me arise. The non-zero determinant of mν dictates six possible textures of the constituent matrices. We ask, in this minimalistic approach, what phenomenologically allowed maximal zero textures are possible. It turns out that Inverse seesaw leads to 7 allowed two-zero textures while the Linear seesaw leads to only one. In Inverse seesaw, we show that 2 is the maximum number of independent zeros that can be inserted into μS to obtain all 7 viable two-zero textures of mν. On the other hand, in the Linear seesaw mechanism, the minimal scheme allows a maximum of 5 zeros to be accommodated in ‘m’ so as to obtain viable effective neutrino mass matrices (mν). Interestingly, we find that our minimalistic approach in Inverse seesaw leads to a realization of all the phenomenologically allowed two-zero textures, whereas in Linear seesaw only one such texture is viable. Next, our numerical analysis shows that none of the two-zero textures gives rise to enough CP violation or a significant δCP. Therefore, if δCP = π/2 is established, our minimalistic scheme may still be viable provided we allow a larger number of parameters in ‘me’.
Livi, Lorenzo; Alippi, Cesare
2016-01-01
It is a widely accepted fact that the computational capability of recurrent neural networks is maximized on the so-called "edge of criticality". Once in this configuration, the network performs efficiently on a specific application both in terms of (i) low prediction error and (ii) high short-term memory capacity. Since the behavior of recurrent networks is strongly influenced by the particular input signal driving the dynamics, a universal, application-independent method for determining the edge of criticality is still missing. In this paper, we propose a theoretically motivated method based on Fisher information for determining the edge of criticality in recurrent neural networks. It is proven that Fisher information is maximized for (finite-size) systems operating in such critical regions. However, Fisher information is notoriously difficult to compute and either requires the probability density function or the conditional dependence of the system states with respect to the model parameters. The paper expl...
On the maximal efficiency of the collisional Penrose process
Leiderschneider, Elly
2015-01-01
The center of mass (CM) energy in a collisional Penrose process - a collision taking place within the ergosphere of a Kerr black hole - can diverge under suitable extreme conditions (maximal Kerr, near horizon collision and suitable impact parameters). We present an analytic expression for the CM energy, refining expressions given in the literature. Even though the CM energy diverges, we show that the maximal energy attained by a particle that escapes the black hole's gravitational pull and reaches infinity is modest. We obtain an analytic expression for the energy of an escaping particle resulting from a collisional Penrose process, and apply it to derive the maximal energy and the maximal efficiency for several physical scenarios: pair annihilation, Compton scattering, and the elastic scattering of two massive particles. In all physically reasonable cases (in which the incident particles initially fall from infinity towards the black hole) the maximal energy (and the corresponding efficiency) are only one o...
Plubtieng Somyot
2009-01-01
We introduce an iterative scheme for finding a common element of the solution set of a maximal monotone operator and the solution set of the variational inequality problem for an inverse strongly-monotone operator in a uniformly smooth and uniformly convex Banach space, and then we prove weak and strong convergence theorems by using the notion of generalized projection. The results presented in this paper extend and improve the corresponding results of Kamimura et al. (2004) and Iiduka and Takahashi (2008). Finally, we apply our convergence theorem to the convex minimization problem, the problem of finding a zero point of a maximal monotone operator, and the complementarity problem.
Submodular Function Maximization via the Multilinear Relaxation and Contention Resolution Schemes
Chekuri, Chandra; Zenklusen, Rico
2011-01-01
We consider the problem of maximizing a non-negative submodular set function $f:2^N \\rightarrow \\mathbb{R}_+$ over a ground set $N$ subject to a variety of packing type constraints. In this paper we develop a general framework leading to a number of new results, in particular when $f$ may be a {\\em non-monotone} function. Our algorithms are based on (approximately) maximizing the multilinear extension $F$ of $f$ \\cite{CCPV07} over a polytope $P$ that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully in some settings \\cite{CCPV09,KulikST09,LeeMNS09,CVZ10,BansalKNS10}, it has been limited in some important ways. We overcome these limitations as follows. First, we give constant factor approximation algorithms to maximize $F$ over any down-closed polytope $P$ that has an efficient separation oracle. Previously this was known only for monotone functions \\cite{Vondrak08}. For non-monotone functions, a constant factor was known ...
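The multilinear extension F referred to in this abstract has a simple probabilistic definition: F(x) = E[f(R_x)], where R_x contains each element i independently with probability x_i. A small Monte-Carlo sketch of estimating F for a toy coverage function follows (the example is my own, not the authors' algorithm):

```python
import random

def multilinear_extension(f, x, samples=20000, seed=1):
    """Monte-Carlo estimate of F(x) = E[f(R)], where the random set R
    contains element i independently with probability x[i]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        R = {i for i in range(len(x)) if rng.random() < x[i]}
        total += f(R)
    return total / samples

# toy non-negative submodular (coverage) function: size of the union of chosen sets
ground_sets = [{0, 1}, {1, 2}, {3}]
f = lambda S: len(set().union(*[ground_sets[i] for i in S]))

# exact value at x = (0.5, 0.5, 0.5): each element e is covered with
# probability 1 - prod(1 - x_i) over the sets containing e, giving 2.25
print(multilinear_extension(f, [0.5, 0.5, 0.5]))  # close to 2.25
```

The framework in the paper maximizes this F over a polytope and then rounds the fractional point; the estimator above is only the evaluation oracle that such algorithms rely on.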
Maximizing biomarker discovery by minimizing gene signatures
Chang Chang
2011-12-01
Background: The use of gene signatures can potentially be of considerable value in the field of clinical diagnosis. However, gene signatures defined with different methods can vary considerably even when applied to the same disease and the same endpoint. Previous studies have shown that the correct selection of subsets of genes from microarray data is key for the accurate classification of disease phenotypes, and a number of methods have been proposed for this purpose. However, these methods refine the subsets by considering each feature individually, and they do not confirm the association between the genes identified in each gene signature and the phenotype of the disease. We propose an innovative new method, termed Minimize Feature's Size (MFS), based on multiple-level similarity analyses and the association between the genes and the disease, for breast cancer endpoints, comparing classifier models generated in the second phase of MicroArray Quality Control (MAQC-II) and trying to develop effective meta-analysis strategies to transform the MAQC-II signatures into a robust and reliable set of biomarkers for clinical applications. Results: We analyzed the similarity of the multiple gene signatures within an endpoint and between the two breast cancer endpoints at the probe and gene levels. The results indicate that disease-related genes can be preferentially selected as components of a gene signature, and that the gene signatures for the two endpoints could be interchangeable. The minimized signatures were built at the probe level using MFS for each endpoint. By applying the approach, we generated a much smaller gene signature with predictive power similar to that of the gene signatures from MAQC-II. Conclusions: Our results indicate that gene signatures of both large and small sizes can perform equally well in clinical applications. Besides, consistency and biological significance can be detected among different gene signatures, reflecting the
Berthon, P; Fellmann, N
2002-09-01
The maximal aerobic velocity concept, developed since the eighties, is considered either as the minimal velocity which elicits maximal oxygen consumption or as the "velocity associated with maximal oxygen consumption". Different methods for measuring maximal aerobic velocity on a treadmill under laboratory conditions have been elaborated, but all these specific protocols measure V(amax) either during a maximal oxygen consumption test or in association with such a test. An inaccurate method presents a number of problems for the subsequent use of the results, for example in the elaboration of training programs, in the study of repeatability, or in the determination of individual limit time. This study analyzes 14 different methods to understand their interests and limits, with a view to proposing a general methodology for measuring V(amax). In brief, the test should be progressive and maximal, without any rest period, and of 17 to 20 min total duration. It should begin with a five-min warm-up at 60-70% of the maximal aerobic power of the subjects. The start of the trial should be fixed so that four or five steps have to be run. The duration of the steps should be three min with a 1% slope and a speed increment of 1.5 km x h(-1) until complete exhaustion. The last steps can be reduced to two min for a 1 km x h(-1) increment. The maximal aerobic velocity is adjusted in relation to the duration of the last step.
Transfinite diameter of Bernstein sets in
Bialas-Cież Leokadia
2002-01-01
Let be a compact set in satisfying the following generalized Bernstein inequality: for each such that , for each polynomial of degree where is a constant independent of and , is an infinite set of natural numbers that is also independent of and . We give an estimate for the transfinite diameter of the set : For satisfying the usual Bernstein inequality (i.e., , we prove that
Maximal stochastic transport in the Lorenz equations
Agarwal, Sahil, E-mail: sahil.agarwal@yale.edu [Program in Applied Mathematics, Yale University, New Haven (United States); Wettlaufer, J.S., E-mail: john.wettlaufer@yale.edu [Program in Applied Mathematics, Yale University, New Haven (United States); Departments of Geology & Geophysics, Mathematics and Physics, Yale University, New Haven (United States); Mathematical Institute, University of Oxford, Oxford (United Kingdom); Nordita, Royal Institute of Technology and Stockholm University, Stockholm (Sweden)
2016-01-08
We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh–Bénard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected, the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering [1], but their variation with noise amplitude exhibits interesting behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity depends on the number of realizations in the ensemble; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits, the degree of which depends on the degree to which the ensemble represents the ergodic set. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations and the numerical convergence of the noise correlations. The numerical convergence of both the ensemble and time averages of the noise correlations is sufficiently slow that it is the limiting aspect of the realization of these bounds. Finally, we note that the full solutions of the stochastic equations demonstrate that the effect of noise is equivalent to the effect of chaos.
Quantum independent increment processes
Franz, Uwe
2005-01-01
This volume is the first of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald during the period March 9 – 22, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present first volume contains the following lectures: "Lévy Processes in Euclidean Spaces and Groups" by David Applebaum, "Locally Compact Quantum Groups" by Johan Kustermans, "Quantum Stochastic Analysis" by J. Martin Lindsay, and "Dilations, Cocycles and Product Systems" by B.V. Rajarama Bhat.
Field Independent Cosmic Evolution
Nayem Sk
2013-01-01
It has been shown earlier that Noether symmetry does not admit a form of corresponding to an action in which is coupled to scalar-tensor theory of gravity or even for pure theory of gravity taking anisotropic model into account. Here, we prove that theory of gravity does not admit Noether symmetry even if it is coupled to tachyonic field and considering a gauge in addition. To handle such a theory, a general conserved current has been constructed under a condition which decouples higher-order curvature part from the field part. This condition, in principle, solves for the scale-factor independently. Thus, cosmological evolution remains independent of the form of the chosen field, whether it is a scalar or a tachyon.
石波
2000-01-01
Ⅰ. Introduction. At present, in college English extensive reading classes, most students are not used to being independent. They always ask the teacher to explain the passages sentence by sentence, and they need a lot of time to use the dictionary. Yet we should take responsibility for the students by making clear the difference between intensive and extensive reading. The traditional teaching approach pays more attention to the teacher-centered way; the teacher always plays a monodrama and dominates the class. The students lack initiative. Some students do not know where to start, and others are short of fast-reading skills, always fixing their eyes on one word or one sentence. Under the new situation and new thinking, students should learn to be more independent.
Bayesian Independent Component Analysis
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...... in a Matlab toolbox, is demonstrated for non-negative decompositions and compared with non-negative matrix factorization....
M.G. Bara Filho
2008-01-01
Strength and flexibility are common components of a training program, and their maximal values are obtained through specific tests. However, little is known about the muscle-damage effect of these training procedures on skeletal muscle. Objective: to verify serum CK changes 24 h after a submaximal stretching routine and after static flexibility and maximal strength tests. Methods: the sample was composed of 14 subjects (men and women, 28 ± 6 yr), physical education students. The volunteers were divided into a control group (CG) and an experimental group (EG) that was submitted to a stretching routine (EG-ST), to a maximal static flexibility test (EG-FLEX) and to a 1-RM test (EG-1RM), with a one-week interval between tests. The anthropometric characteristics were obtained by a digital scale with stadiometer (Filizola, São Paulo, Brasil, 2002). The blood samples were analyzed using the IFCC method, with reference values 26-155 U/L. The De Lorme and Watkins technique was used to assess maximal strength through the bench press and leg press. The maximal flexibility test consisted of three 20-second sets up to the point of maximal discomfort. The stretching was done within normal movement amplitude for 6 seconds. Results: the basal and post-24 h CK values in CG and EG (ST, FLEX and 1RM) were, respectively, 195.0 ± 129.5 vs. 202.1 ± 124.2; 213.3 ± 133.2 vs. 174.7 ± 115.8; 213.3 ± 133.2 vs. 226.6 ± 126.7; and 213.3 ± 133.2 vs. 275.9 ± 157.2. A significant difference (p = 0.02) between pre and post values was observed only in EG-1RM. Conclusion: only the maximal strength dynamic exercise was capable of causing skeletal muscle damage.
Planat, Michel, E-mail: michel.planat@femto-st.fr [Institut FEMTO-ST, CNRS, 32 Avenue de l' Observatoire, F-25044 Besançon (France); Saniga, Metod, E-mail: msaniga@astro.sk [Astronomical Institute, Slovak Academy of Sciences, SK-05960 Tatranská Lomnica (Slovakia)
2012-10-15
Employing five commuting sets of five-qubit observables, we propose specific 160–661 and 160–21 state proofs of the Bell–Kochen–Specker theorem that are also proofs of Bell's theorem. A histogram of the ‘Hilbert–Schmidt’ distances between the corresponding maximal bases shows in both cases a noise-like behavior. The five commuting sets are also ascribed a finite-geometrical meaning in terms of the structure of symplectic polar space W(9,2).
Boccia, Gennaro; Dardanello, Davide; Tarperi, Cantor; Festa, Luca; La Torre, Antonio; Pellegrini, Barbara; Schena, Federico; Rainoldi, Alberto
2017-08-01
We examined whether the presence of fatigue induced by prolonged running influenced the time courses of force generating capacities throughout a series of intermittent rapid contractions. Thirteen male amateur runners performed a set of 15 intermittent isometric rapid contractions of the knee extensor muscles, (3s/5s on/off) the day before (PRE) and immediately after (POST) a half marathon. The maximal voluntary contraction force, rate of force development (RFDpeak), and their ratio (relative RFDpeak) were calculated. At POST, considering the first (out of 15) repetition, the maximal force and RFDpeak decreased (p<0.0001) at the same extent (by 22±6% and 24±22%, respectively), resulting in unchanged relative RFDpeak (p=0.6). Conversely, the decline of RFDpeak throughout the repetitions was more pronounced at POST (p=0.02), thus the decline of relative RFDpeak was more pronounced (p=0.007) at POST (-25±13%) than at PRE (-3±13%). The main finding of this study was that the fatigue induced by a half-marathon caused a more pronounced impairment of rapid compared to maximal force in the subsequent intermittent protocol. Thus, the fatigue-induced impairment in rapid muscle contractions may have a greater effect on repeated, rather than on single, attempts of maximal force production.
Oliveira, Felipe B D; Oliveira, Anderson S C; Rizatto, Guilherme F; Denadai, Benedito S
2013-01-01
The aim of the present study was to verify whether strength training designed to improve explosive and maximal strength would influence the rate of force development (RFD). Nine men participated in a 6-week knee extensor resistance training program and 9 matched subjects participated as controls. Throughout the training sessions, subjects were instructed to perform isometric knee extensions as fast and forcefully as possible, achieving at least 90% maximal voluntary contraction as quickly as possible, hold it for 5 s, and relax. Fifteen seconds separated each repetition (6-10), and 2 min separated each set (3). Pre- and post-training measurements were maximal isometric knee extensor torque (MVC), RFD, and RFD relative to MVC (i.e., %MVC·s(-1)) in different time epochs varying from 10 to 250 ms from the contraction onset. The MVC (Nm) increased by 19% (275.8 ± 64.9 vs. 329.8 ± 60.4, p …) … force can be differently influenced by resistance training. Thus, resistance training programs should consider the specific neuromuscular demands of each sport. In active non-strength-trained individuals, a short-term resistance training program designed to increase both explosive and maximal strength seems to reduce the adaptive response (i.e. increased RFDMAX) evoked by training with an intended ballistic effort (i.e. high-RFD contraction).
The Rank-Size Scaling Law and Entropy-Maximizing Principle
Chen, Yanguang
2011-01-01
The rank-size regularity known as Zipf's law is one of scaling laws and frequently observed within the natural living world and in social institutions. Many scientists tried to derive the rank-size scaling relation by entropy-maximizing methods, but the problem failed to be resolved thoroughly. By introducing a pivotal constraint condition, I present here a set of new derivations based on the self-similar hierarchy of cities. First, I derive a pair of exponent laws by postulating local entropy maximizing. From the two exponential laws follows a general hierarchical scaling law, which implies general Zipf's law. Second, I derive a special hierarchical scaling law with exponent equal to 1 by postulating global entropy maximizing, and this implies the strong form of Zipf's law. The rank-size scaling law proved to be one of the special cases of the hierarchical law, and the derivation suggests a certain scaling range with the first or last data point as an outlier. The entropy maximization of social systems diffe...
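The rank-size relation discussed above takes the form P(r) = P1 · r^(−q), with q = 1 in the strong (Zipf) case. As a minimal numerical illustration only (not the paper's entropy-maximizing or hierarchical derivation; function names are ours), the sketch below generates ideal rank-size data and recovers the exponent by a log-log least-squares fit:

```python
import math

def zipf_sizes(p1, q, n):
    """Ideal rank-size data P(r) = p1 * r**(-q) for ranks r = 1..n."""
    return [p1 * r ** (-q) for r in range(1, n + 1)]

def fit_rank_size_exponent(sizes):
    """Least-squares slope of ln(size) against ln(rank); the rank-size
    exponent q is minus that slope."""
    xs = [math.log(r) for r in range(1, len(sizes) + 1)]
    ys = [math.log(s) for s in sizes]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

On real city-size data, the fit would normally be restricted to the scaling range, excluding the first or last data point when it behaves as an outlier, as the abstract suggests.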
Erol, Volkan; Ozaydin, Fatih; Altintas, Azmi Ali
2014-06-24
Entanglement has been studied extensively for unveiling the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well known measures for quantifying entanglement such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. It was found that for sets of non-maximally entangled states of two qubits, comparing these entanglement measures may lead to different entanglement orderings of the states. On the other hand, although it is not an entanglement measure and not monotonic under local operations, due to its ability to detect multipartite entanglement, quantum Fisher information (QFI) has recently attracted intense attention, generally with entanglement in focus. In this work, we revisit the state ordering problem of general two-qubit states. Generating a thousand random quantum states and performing an optimization based on local general rotations of each qubit, we calculate the maximal QFI for each state. We analyze the maximized QFI in comparison with concurrence, REE and negativity and obtain new state orderings. We show that there are pairs of states having equal maximized QFI but different values for concurrence, REE and negativity, and vice versa.
Muscle Damage following Maximal Eccentric Knee Extensions in Males and Females.
K M Hicks
To investigate whether there is a sex difference in exercise-induced muscle damage, Vastus Lateralis and patella tendon properties were measured in males and females using ultrasonography. During maximal voluntary eccentric knee extensions (12 reps × 6 sets), Vastus Lateralis fascicle lengthening and maximal voluntary eccentric knee extension torque were recorded every 10° of knee joint angle (20-90°). Isometric torque, Creatine Kinase and muscle soreness were measured pre, post, 48, 96 and 168 hours post damage as markers of exercise-induced muscle damage. Patella tendon stiffness and Vastus Lateralis fascicle lengthening were significantly higher in males compared to females (p < 0.05). Creatine Kinase levels post exercise-induced muscle damage were higher in males compared to females (p < 0.05), and remained higher when maximal voluntary eccentric knee extension torque, relative to estimated quadriceps anatomical cross sectional area, was taken as a covariate (p < 0.05). Based on isometric torque loss, there is no sex difference in exercise-induced muscle damage. The higher Creatine Kinase in males could not be explained by differences in maximal voluntary eccentric knee extension torque, Vastus Lateralis fascicle lengthening and patella tendon stiffness. Further research is required to understand the significant sex differences in Creatine Kinase levels following exercise-induced muscle damage.
2012-08-20
... Services Administration. The purpose of these programs is to promote the independent living philosophy... with significant disabilities and to promote and maximize the integration and full inclusion of... programs is to promote the independent living philosophy--based on consumer control, peer support,...
An Inquiry into the Bibliography of Some of the Bustan's Maxims
Rahmatian, Sajjad
2016-12-01
Sa`di is one of those poets who gave a special place to preaching and guiding the people; among his works, the text of the Bustan is devoted throughout to advice and maxims on various legal and ethical subjects. Surely, in composing this work and expressing its moral points, Sa`di was directly or indirectly influenced by earlier sources, possibly drawing on their content. The main purpose of this article is to review the basis and sources of the Bustan's maxims and to show by which texts and works Sa`di was influenced when expressing the maxims of this work. To this end, we search the sources that are devoted, more or less, to aphorisms, in order to discover and extract traces of the influence of their moral and didactic content on Sa`di. Among the most important findings of this study are the indirect influence of some Pahlavi books of maxims (such as the maxims of Azarbad Marespandan and the book of maxims of Bozorgmehr), as well as Sa`di's direct debt to the moral and ethical works of poets and writers before him; his debt to the maxims of Abu Shakur Balkhi, Ferdowsi and Keikavus is particularly remarkable and noteworthy.
Maximizing efficiency on trauma surgeon rounds.
Ramaniuk, Aliaksandr; Dickson, Barbara J; Mahoney, Sean; O'Mara, Michael S
2017-01-01
Rounding by trauma surgeons is a complex multidisciplinary team-based process in the inpatient setting. Implementation of lean methodology aims to increase understanding of the value stream and eliminate nonvalue-added (NVA) components. We hypothesized that analysis of trauma rounds with education and intervention would improve surgeon efficacy. Level 1 trauma center with 4300 admissions per year. Average non-intensive care unit census was 55. Five full-time attending trauma surgeons were evaluated. Value-added (VA) and NVA components of rounding were identified. The components of each patient interaction during daily rounds were documented. Summary data were presented to the surgeons. An action plan of improvement was provided at group and individual interventions. Change plans were presented to the multidisciplinary team. Data were recollected 6 mo after intervention. The percent of interactions with NVA components decreased (16.0% to 10.7%, P = 0.0001). There was no change between the two periods in time of evaluation of individual patients (4.0 and 3.5 min, P = 0.43). Overall time to complete rounds did not change. There was a reduction in the number of interactions containing NVA components (odds ratio = 2.5). The trauma surgeons were able to reduce the NVA components of rounds. We did not see a decrease in rounding time or individual patient time. This implies that surgeons were able to reinvest freed time into patient care, or that the NVA components were somehow not increasing process time. Direct intervention for isolated improvements can be effective in the rounding process, and efforts should be focused upon improving the value of time spent rather than reducing time invested. Copyright © 2016 Elsevier Inc. All rights reserved.
The Maximal Graded Left Quotient Algebra of a Graded Algebra
Gonzalo ARANDA PINO; Mercedes SILES MOLINA
2006-01-01
We construct the maximal graded left quotient algebra of every graded algebra A without homogeneous total right zero divisors as the direct limit of graded homomorphisms (of left A-modules)from graded dense left ideals of A into a graded left quotient algebra of A. In the case of a superalgebra,and with some extra hypothesis, we prove that the component in the neutral element of the group of the maximal graded left quotient algebra coincides with the maximal left quotient algebra of the component in the neutral element of the group of the superalgebra.
Maximal regularity of second order delay equations in Banach spaces
[None listed]
2010-01-01
We give necessary and sufficient conditions for Lp-maximal regularity (resp. B^s_{p,q}-maximal regularity or F^s_{p,q}-maximal regularity) of the second order delay equations: u''(t) = Au(t) + Gu'_t + Fu_t + f(t), t ∈ [0, 2π], with periodic boundary conditions u(0) = u(2π), u'(0) = u'(2π), where A is a closed operator in a Banach space X, and F and G are delay operators on Lp([-2π, 0]; X) (resp. B^s_{p,q}([-2π, 0]; X) or F^s_{p,q}([-2π, 0]; X)).
Building hospital TQM teams: effective polarity analysis and maximization.
Hurst, J B
1996-09-01
Building and maintaining teams require careful attention to and maximization of such polar opposites ("polarities") as individual and team, directive and participatory leadership, task and process, and stability and change. Analyzing systematic elements of any polarity and listing blocks, supports, and flexible ways to maximize it will prevent the negative consequences that occur when treating a polarity like a solvable problem. Flexible, well-timed shifts from pole to pole result in the maximization of upside and minimization of downside consequences.
People believe each other to be selfish hedonic maximizers.
De Vito, Stefania; Bonnefon, Jean-François
2014-10-01
Current computational models of theory of mind typically assume that humans believe each other to selfishly maximize utility, for a conception of utility that makes it indistinguishable from personal gains. We argue that this conception is at odds with established facts about human altruism, as well as the altruism that humans expect from each other. We report two experiments showing that people expect other agents to selfishly maximize their pleasure, even when these other agents behave altruistically. Accordingly, defining utility as pleasure permits us to reconcile the assumption that humans expect each other to selfishly maximize utility with the fact that humans expect each other to behave altruistically.
Lp Estimates of Rough Maximal Functions Along Surfaces with Applications
Ahmad AL-SALMAN; Abdulla M. JARRAH
2016-01-01
In this paper, we study the Lp mapping properties of certain class of maximal oscillatory singular integral operators. We prove a general theorem for a class of maximal functions along surfaces. As a consequence of such theorem, we establish the Lp boundedness of various maximal oscillatory singular integrals provided that their kernels belong to the natural space L log L(Sn−1). Moreover, we highlight some additional results concerning operators with kernels in certain block spaces. The results in this paper substantially improve previously known results.
Independence among People with Disabilities: II. Personal Independence Profile.
Nosek, Margaret A.; And Others
1992-01-01
Developed Personal Independence Profile (PIP) as an instrument to measure aspects of independence beyond physical and cognitive functioning in people with diverse disabilities. PIP was tested for reliability and validity with 185 subjects from 10 independent living centers. Findings suggest that the Personal Independence Profile measures the…
Rough Set Theory over Fuzzy Lattices
Guilong Liu
2006-01-01
Rough set theory, proposed by Pawlak in 1982, is a tool for dealing with the uncertainty and vagueness aspects of a knowledge model. The main idea of rough sets corresponds to the lower and upper approximations based on equivalence relations. This paper studies the rough set and its extension. In our talk, we present a linear algebra approach to the rough set and its extension, give an equivalent definition of the lower and upper approximations of the rough set based on the characteristic function of sets, and then explain the lower and upper approximations as the colinear map and linear map of sets, respectively. Finally, we define rough sets over fuzzy lattices, which cover the rough set and the fuzzy rough set, and independent axiomatic systems are constructed to characterize the lower and upper approximations of the rough set over fuzzy lattices, respectively, based on inner and outer products. The axiomatic systems unify the axiomatization of Pawlak's rough sets and fuzzy rough sets.
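The classical Pawlak approximations mentioned above are easy to state concretely: given a partition of the universe into equivalence classes, the lower approximation collects the classes fully contained in a target set, and the upper approximation collects the classes that meet it. A minimal sketch of these classical definitions only (not the paper's fuzzy-lattice extension; the function name is illustrative):

```python
def approximations(partition, target):
    """Pawlak lower and upper approximations of `target` with respect to the
    equivalence classes in `partition` (a list of disjoint sets)."""
    target = set(target)
    lower, upper = set(), set()
    for block in partition:
        if block <= target:      # class contained in the target: certainly in
            lower |= block
        if block & target:       # class meeting the target: possibly in
            upper |= block
    return lower, upper
```

For example, with partition {1,2}, {3,4}, {5} and target {1,2,3}, the lower approximation is {1,2} and the upper approximation is {1,2,3,4}: the element 3 cannot be distinguished from 4, which is the vagueness rough sets capture.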
Maximizing information exchange between complex networks
West, Bruce J.; Geneston, Elvis L.; Grigolini, Paolo
2008-10-01
Science is not merely the smooth progressive interaction of hypothesis, experiment and theory, although it sometimes has that form. More realistically the scientific study of any given complex phenomenon generates a number of explanations, from a variety of perspectives, that eventually requires synthesis to achieve a deep level of insight and understanding. One such synthesis has created the field of out-of-equilibrium statistical physics as applied to the understanding of complex dynamic networks. Over the past forty years the concept of complexity has undergone a metamorphosis. Complexity was originally seen as a consequence of memory in individual particle trajectories, in full agreement with a Hamiltonian picture of microscopic dynamics and, in principle, macroscopic dynamics could be derived from the microscopic Hamiltonian picture. The main difficulty in deriving macroscopic dynamics from microscopic dynamics is the need to take into account the actions of a very large number of components. The existence of events such as abrupt jumps, considered by the conventional continuous time random walk approach to describing complexity was never perceived as conflicting with the Hamiltonian view. Herein we review many of the reasons why this traditional Hamiltonian view of complexity is unsatisfactory. We show that as a result of technological advances, which make the observation of single elementary events possible, the definition of complexity has shifted from the conventional memory concept towards the action of non-Poisson renewal events. We show that the observation of crucial processes, such as the intermittent fluorescence of blinking quantum dots as well as the brain’s response to music, as monitored by a set of electrodes attached to the scalp, has forced investigators to go beyond the traditional concept of complexity and to establish closer contact with the nascent field of complex networks. Complex networks form one of the most challenging areas of
Experimental Implementation of a Kochen-Specker Set of Quantum Tests
Vincenzo D’Ambrosio
2013-02-01
The conflict between classical and quantum physics can be identified through a series of yes-no tests on quantum systems, without it being necessary that these systems be in special quantum states. Kochen-Specker (KS) sets of yes-no tests have this property and provide a quantum-versus-classical advantage that is free of the initialization problem that affects some quantum computers. Here, we report the first experimental implementation of a complete KS set that consists of 18 yes-no tests on four-dimensional quantum systems and show how to use the KS set to obtain a state-independent quantum advantage. We first demonstrate the unique power of this KS set for solving a task while avoiding the problem of state initialization. Such a demonstration is done by showing that, for 28 different quantum states encoded in the orbital-angular-momentum and polarization degrees of freedom of single photons, the KS set provides an impossible-to-beat solution. In a second experiment, we generate maximally contextual quantum correlations by performing compatible sequential measurements of the polarization and path of single photons. In this case, state independence is demonstrated for 15 different initial states. Maximum contextuality and state independence follow from the fact that the sequences of measurements project any initial quantum state onto one of the KS set's eigenstates. Our results show that KS sets can be used for quantum-information processing and quantum computation and pave the way for future developments.
The exact maximal energy of integral circulant graphs with prime power order
Sander, J W
2011-01-01
The energy of a graph was introduced by Gutman in 1978 as the sum of the absolute values of the eigenvalues of its adjacency matrix. We study the energy of integral circulant graphs, also called gcd graphs, which can be characterized by their vertex count $n$ and a set $\cal D$ of divisors of $n$ in such a way that they have vertex set $\mathbb{Z}/n\mathbb{Z}$ and edge set $\{\{a,b\}:\, a,b\in\mathbb{Z}/n\mathbb{Z},\, \gcd(a-b,n)\in {\cal D}\}$. Given an arbitrary prime power $p^s$, we determine all divisor sets maximising the energy of an integral circulant graph of order $p^s$. This enables us to compute the maximal energy $E_{\max}(p^s)$ among all integral circulant graphs of order $p^s$.
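Graph energy as defined above can be computed for an integral circulant graph without any matrix diagonalization: the adjacency matrix is circulant, so its eigenvalues are Fourier sums over the connection set. A small numerical sketch (names are ours, not the paper's notation):

```python
import math

def icg_energy(n, divisor_set):
    """Energy of the integral circulant graph ICG(n, D): vertex set Z/nZ,
    edges {a, b} with gcd(a - b, n) in D.  The adjacency matrix is circulant,
    so its eigenvalues are the Fourier sums
        lambda_j = sum over a with gcd(a, n) in D of cos(2*pi*j*a / n),
    for j = 0..n-1, and the energy is sum_j |lambda_j|."""
    connection = [a for a in range(1, n) if math.gcd(a, n) in divisor_set]
    energy = 0.0
    for j in range(n):
        lam = sum(math.cos(2.0 * math.pi * j * a / n) for a in connection)
        energy += abs(lam)
    return energy
```

Sanity checks: ICG(5, {1}) is the complete graph K5, whose energy is 2(n-1) = 8, and ICG(4, {1}) is the 4-cycle with energy 4.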
Maximally entangled state can be a mixed state
Li, Zong-Guo; Fei, Shao-Ming; Fan, Heng; Liu, W M
2009-01-01
We present mixed maximally entangled states in $d \otimes d'$ ($d' \geq 2d$) spaces. This result is beyond the generally accepted fact that all maximally entangled states are pure. These states possess important properties of the pure maximally entangled states in $d \otimes d$ systems; for example, they can be used as a resource for faithful teleportation, and their local distinguishability property is the same as in the pure-state case. On the other hand, one advantage of these mixed maximally entangled states is that the decoherence induced by certain noisy quantum channels does not destroy their entanglement. Thus one party of these mixed states can be sent through such a channel to arbitrary distance while still keeping them as a valuable resource for quantum information processing. We also propose a scheme to prepare these states and confirm their advantage in an NMR physical system.
Maximizing antimalarial efficacy and the importance of dosing strategies
Beeson, James G; Boeuf, Philippe; Fowkes, Freya J I
2015-01-01
.... Without new drugs to replace artemisinins, it is essential to define dosing strategies that maximize therapeutic efficacy, limit the spread of resistance, and preserve the clinical value of ACTs...
Maximal entanglement versus entropy for mixed quantum states
Wei, Tzu-Chieh; Nemoto, Kae; Goldbart, Paul M.; Kwiat, Paul G.; Munro, William J.; Verstraete, Frank
2003-01-01
Maximally entangled mixed states are those states that, for a given mixedness, achieve the greatest possible entanglement. For two-qubit systems and for various combinations of entanglement and mixedness measures, the form of the corresponding maximally entangled mixed states is determined primarily analytically. As measures of entanglement, we consider entanglement of formation, relative entropy of entanglement, and negativity; as measures of mixedness, we consider linear and von Neumann entropies. We show that the forms of the maximally entangled mixed states can vary with the combination of (entanglement and mixedness) measures chosen. Moreover, for certain combinations, the forms of the maximally entangled mixed states can change discontinuously at a specific value of the entropy.
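Two of the measures paired in this abstract can be sketched numerically for two-qubit states: Wootters' concurrence (the quantity underlying entanglement of formation) as the entanglement measure, and the normalized linear entropy as the mixedness measure. This is a hedged illustration assuming NumPy; function names are ours:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (4x4):
    C = max(0, l1 - l2 - l3 - l4), where l_i are the decreasing square roots
    of the eigenvalues of rho * rho-tilde."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    flip = np.kron(sy, sy)                       # spin-flip operator sigma_y (x) sigma_y
    r = rho @ flip @ rho.conj() @ flip           # rho * rho-tilde
    lam = np.sqrt(np.clip(np.linalg.eigvals(r).real, 0.0, None))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def linear_entropy(rho):
    """Normalized linear entropy S_L = (4/3) * (1 - Tr[rho^2]) for two qubits:
    0 for pure states, 1 for the maximally mixed state."""
    return 4.0 / 3.0 * (1.0 - np.trace(rho @ rho).real)
```

A Bell state gives concurrence 1 at linear entropy 0, while the maximally mixed state gives concurrence 0 at linear entropy 1; maximally entangled mixed states trace out the frontier between these extremes.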
Carnot cycle at finite power: attainability of maximal efficiency.
Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G
2013-08-01
We want to understand whether and to what extent the maximal (Carnot) efficiency for heat engines can be reached at a finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein folding problem. The resolution of this paradox for realistic proteins allows one to construct engines that can extract, at finite power, 40% of the maximally possible work while reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at a large power.
Classification of conformal representations induced from the maximal cuspidal parabolic
Dobrev, V. K., E-mail: dobrev@inrne.bas.bg [Scuola Internazionale Superiore di Studi Avanzati (Italy)
2017-03-15
In the present paper we continue the project of systematic construction of invariant differential operators on the example of representations of the conformal algebra induced from the maximal cuspidal parabolic.
Secret Key Generation for a Pairwise Independent Network Model
Nitinawarat, Sirin; Barg, Alexander; Narayan, Prakash; Reznik, Alex
2010-01-01
We consider secret key generation for a "pairwise independent network" model in which every pair of terminals observes correlated sources that are independent of sources observed by all other pairs of terminals. The terminals are then allowed to communicate publicly with all such communication being observed by all the terminals. The objective is to generate a secret key shared by a given subset of terminals at the largest rate possible, with the cooperation of any remaining terminals. Secrecy is required from an eavesdropper that has access to the public interterminal communication. A (single-letter) formula for secret key capacity brings out a natural connection between the problem of secret key generation and a combinatorial problem of maximal packing of Steiner trees in an associated multigraph. An explicit algorithm is proposed for secret key generation based on a maximal packing of Steiner trees in a multigraph; the corresponding maximum rate of Steiner tree packing is thus a lower bound for the secret ...
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
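The notions in this abstract can be made concrete with a short sketch: a chordality test (maximum cardinality search followed by a perfect-elimination-ordering check) and a naive augmentation loop that keeps adding edges while the subgraph stays chordal. This is a simple sequential greedy illustration under our own naming, not the paper's parallel algorithm, and it is far less efficient:

```python
def is_chordal(adj):
    """Chordality test: run maximum cardinality search (MCS), then verify
    that the reverse of the MCS visit order is a perfect elimination ordering."""
    nodes = list(adj)
    weight = {v: 0 for v in nodes}
    order, seen = [], set()
    for _ in nodes:
        v = max((u for u in nodes if u not in seen), key=lambda u: weight[u])
        order.append(v)
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                weight[w] += 1
    pos = {v: i for i, v in enumerate(reversed(order))}
    for v in nodes:
        later = [w for w in adj[v] if pos[w] > pos[v]]
        if later:
            u = min(later, key=lambda w: pos[w])
            # every other later neighbour must be adjacent to the first one
            if any(w != u and w not in adj[u] for w in later):
                return False
    return True

def maximal_chordal_subgraph(nodes, edges):
    """Greedy augmentation: keep adding edges while the subgraph stays
    chordal, sweeping until no further edge can be added."""
    adj = {v: set() for v in nodes}
    remaining, kept = list(edges), []
    progress = True
    while progress and remaining:
        progress, deferred = False, []
        for a, b in remaining:
            adj[a].add(b)
            adj[b].add(a)
            if is_chordal(adj):
                kept.append((a, b))
                progress = True
            else:
                adj[a].discard(b)
                adj[b].discard(a)
                deferred.append((a, b))
        remaining = deferred
    return kept
```

On a 4-cycle the loop keeps three of the four edges (the fourth would close a chordless cycle), while a complete graph, being chordal, is kept in full.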
Probing the deviation from maximal mixing of atmospheric neutrinos
Choubey, S; Choubey, Sandhya; Roy, Probir
2006-01-01
Pioneering atmospheric muon neutrino experiments have demonstrated the near-maximal magnitude of the flavor mixing angle $\theta_{23}$. But the precise value of the deviation $D \equiv 1/2 - \sin^2 \theta_{23}$ from maximality (if nonzero) needs to be known, being of great interest -- especially to builders of neutrino mass and mixing models. We quantitatively investigate in a three generation framework the feasibility of determining $D$ in a statistically significant manner from studies of the atmospheric ...
Shareholder, stakeholder-owner or broad stakeholder maximization
Mygind, Niels
2004-01-01
including the shareholders of a company. Although it may be the ultimate goal for Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to give a precise definition. There is no one-dimensional measure to add different stakeholder benefits...... by other stakeholders' interests. These constraints vary for different stakeholder owners and new standards for Corporate Social Responsibility and more active political consumers will strengthen these constraints....
Poker, Gilad; Zarai, Yoram; Margaliot, Michael; Tuller, Tamir
2014-11-06
Translation is an important stage in gene expression. During this stage, macro-molecules called ribosomes travel along the mRNA strand linking amino acids together in a specific order to create a functioning protein. An important question, related to many biomedical disciplines, is how to maximize protein production. Indeed, translation is known to be one of the most energy-consuming processes in the cell, and it is natural to assume that evolution shaped this process so that it maximizes the protein production rate. If this is indeed so, then one can estimate various parameters of the translation machinery by solving an appropriate mathematical optimization problem. The same problem also arises in the context of synthetic biology, namely, re-engineering heterologous genes in order to maximize their translation rate in a host organism. We consider the problem of maximizing the protein production rate using a computational model for translation-elongation called the ribosome flow model (RFM). This model describes the flow of the ribosomes along an mRNA chain of length n using a set of n first-order nonlinear ordinary differential equations. It also includes n + 1 positive parameters: the ribosomal initiation rate into the mRNA chain, and n elongation rates along the chain sites. We show that the steady-state translation rate in the RFM is a strictly concave function of its parameters. This means that the problem of maximizing the translation rate under a suitable constraint always admits a unique solution, and that this solution can be determined using highly efficient algorithms for solving convex optimization problems even for large values of n. Furthermore, our analysis shows that the optimal translation rate can be computed based only on the optimal initiation rate and the elongation rate of the codons near the beginning of the ORF. We discuss some applications of the theoretical results to synthetic biology, molecular evolution, and functional genomics.
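The ribosome flow model described above is a small ODE system, so its steady-state translation rate can be approximated by simply integrating the equations forward in time. The following is an illustrative forward-Euler sketch of the standard RFM equations (with the usual boundary conventions x_0 := 1 and x_{n+1} := 0), not the paper's concavity analysis or convex-optimization machinery; the function name and default step sizes are ours:

```python
def rfm_steady_state(lam0, rates, t_end=500.0, dt=0.01):
    """Forward-Euler integration of the ribosome flow model
        dx_i/dt = lam_{i-1} * x_{i-1} * (1 - x_i) - lam_i * x_i * (1 - x_{i+1})
    for site occupancies x_1..x_n, with boundary conventions x_0 := 1 and
    x_{n+1} := 0.  `lam0` is the initiation rate, `rates` the n elongation
    rates.  Returns the approximate steady-state translation rate
    R = lam_n * x_n."""
    n = len(rates)
    lams = [lam0] + list(rates)          # lam_0 .. lam_n
    x = [0.0] * n                        # start from an empty mRNA chain
    for _ in range(int(t_end / dt)):
        xs = [1.0] + x + [0.0]           # pad with x_0 and x_{n+1}
        x = [
            xs[i] + dt * (lams[i - 1] * xs[i - 1] * (1.0 - xs[i])
                          - lams[i] * xs[i] * (1.0 - xs[i + 1]))
            for i in range(1, n + 1)
        ]
    return rates[-1] * x[-1]
```

For a single site with unit initiation and elongation rates, the steady state solves 1 - x = x, so x = 0.5 and R = 0.5; the maximization problem the abstract studies is then to choose the rates, under a budget constraint, so that this R is largest.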
Evolution of Shanghai Stock Market Based on Maximal Spanning Trees
Yang, Chunxia; Shen, Ying; Xia, Bingying
2013-01-01
In this paper, using a moving window to scan through every stock price time series over a period from 2 January 2001 to 11 March 2011 and mutual information to measure the statistical interdependence between stock prices, we construct a corresponding weighted network for 501 Shanghai stocks in every given window. Next, we extract its maximal spanning tree and study the structural variation of the Shanghai stock market by analyzing the average path length, the influence of the center node and the p-value for every maximal spanning tree. A further analysis of the structural properties of maximal spanning trees over different periods of the Shanghai stock market is carried out. All the obtained results indicate that the periods around 8 August 2005, 17 October 2007 and 25 December 2008 are turning points of the Shanghai stock market. At turning points, the topology of the maximal spanning tree changes markedly: the degree of separation between nodes increases, the structure becomes looser, the influence of the center node gets smaller, and the degree distribution of the maximal spanning tree is no longer a power-law distribution. Lastly, we analyze the variations of the single-step and multi-step survival ratios for all maximal spanning trees and find that pairs of stocks are closely bonded and hard to separate in the short term; on the contrary, no pair of stocks remains closely bonded for a long time.
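A maximal spanning tree of a weighted network, as used above, is simply a minimum spanning tree computed with the edge order reversed, so the strongest links (here, highest mutual information) are retained. A minimal Kruskal-style sketch, with illustrative names and a toy three-node example rather than the paper's 501-stock data:

```python
def maximal_spanning_tree(nodes, weighted_edges):
    """Kruskal's algorithm run in *descending* weight order, so the strongest
    links are kept.  `weighted_edges` holds (weight, a, b) tuples; returns
    the tree's edge list."""
    parent = {v: v for v in nodes}

    def find(v):                             # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for w, a, b in sorted(weighted_edges, reverse=True):
        ra, rb = find(a), find(b)
        if ra != rb:                         # keep edge only if it joins two components
            parent[ra] = rb
            tree.append((w, a, b))
    return tree
```

On edges A-B (0.9), B-C (0.5) and A-C (0.2), the tree keeps the two strongest links and drops A-C; tracking which tree edges survive from one window to the next gives the single-step survival ratio discussed in the abstract.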