Distributed-Memory Fast Maximal Independent Set
Energy Technology Data Exchange (ETDEWEB)
Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew
2017-09-13
The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem, all designed for shared-memory machines and analyzed in the PRAM model; none of them translates directly into an efficient distributed-memory implementation. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
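The round structure that makes Luby's approach attractive for parallel and distributed execution can be sketched in a few lines of sequential Python. This is a toy illustration of the "Luby(A)" idea only, not the authors' distributed-memory implementation; the dict-of-lists graph representation is an assumption of this sketch:

```python
import random

def luby_mis(adj, seed=0):
    """Sketch of Luby's Algorithm A: each round, every remaining vertex
    draws a random value; local minima join the MIS, and they and their
    neighbors are removed. Terminates in O(log n) rounds in expectation."""
    rng = random.Random(seed)
    live = set(adj)
    mis = set()
    while live:
        val = {v: rng.random() for v in live}
        # a vertex wins the round if its value beats every still-live neighbor
        winners = {v for v in live
                   if all(val[v] < val[u] for u in adj[v] if u in live)}
        mis |= winners
        live -= winners | {u for v in winners for u in adj[v]}
    return mis

# 5-cycle: every maximal independent set has exactly 2 vertices
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
mis = luby_mis(adj)
```

Independence and maximality can be checked directly: no two chosen vertices are adjacent, and every unchosen vertex has a chosen neighbor.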
An application of the maximal independent set algorithm to course ...
African Journals Online (AJOL)
In this paper, we demonstrate one of the many applications of the Maximal Independent Set Algorithm in the area of course allocation. A program was developed in Pascal and used to implement a modified version of the algorithm to assign teaching courses to available lecturers in any academic environment, and it ...
Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks
Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.
Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability to large-scale real-world systems: (1) small set size, (2) minimal network information required for the construction scheme, (3) fast and easy computational implementation, and (4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
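The observation driving this use of MIS is elementary: by maximality, every vertex outside the set has a neighbor inside it, so any MIS is automatically a dominating set. A minimal sketch (greedy MIS on a toy star graph standing in for a scale-free hub; the graph and helper names are illustrative):

```python
def greedy_mis(adj):
    """Greedy maximal independent set: scan vertices, keep those with no
    neighbor already chosen."""
    mis = set()
    for v in adj:
        if not any(u in mis for u in adj[v]):
            mis.add(v)
    return mis

def is_dominating(adj, s):
    """Every vertex is in s or adjacent to a member of s."""
    return all(v in s or any(u in s for u in adj[v]) for v in adj)

# star graph: hub 0 with leaves 1..5 (an extreme toy "scale-free" topology)
adj = {0: [1, 2, 3, 4, 5], **{i: [0] for i in range(1, 6)}}
print(greedy_mis(adj))  # {0}
```

On the star, the hub alone is a maximal independent set, and it dominates every leaf, so the resulting dominating set is as small as possible.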
Maximal independent set graph partitions for representations of body-centered cubic lattices
DEFF Research Database (Denmark)
Erleben, Kenny
2009-01-01
A maximal independent set graph data structure for a body-centered cubic lattice is presented. Refinement and coarsening operations are defined in terms of set-operations, resulting in robust and easy implementation compared to a quad-tree-based implementation. The graph only stores information corresponding to the leaves of a quad-tree and thus has a smaller memory foot-print. The adjacency information in the graph relieves one from going up and down the quad-tree when searching for neighbors. This results in constant time complexities for refinement and coarsening operations.
Maximal abelian sets of roots
Lawther, R
2018-01-01
In this work the author lets Φ be an irreducible root system with Coxeter group W, and considers subsets of Φ which are abelian, meaning that no two roots in the set have sum in Φ ∪ {0}. He classifies all maximal abelian sets (i.e., abelian sets properly contained in no other) up to the action of W: for each W-orbit of maximal abelian sets he provides an explicit representative X, identifies the (setwise) stabilizer W_X of X in W, and decomposes X into W_X-orbits. Abelian sets of roots are closely related to abelian unipotent subgroups of simple algebraic groups, and thus to abelian p-subgroups of finite groups of Lie type over fields of characteristic p. Parts of the work presented here have been used to confirm the p-rank of E_8(p^n), and (somewhat unexpectedly) to obtain for the first time the 2-ranks of the Monster and Baby Monster sporadic groups, together with the double cover of the latter. Root systems of classical type are dealt with quickly here; the vast majority of the present work con...
New maximal two-distance sets
DEFF Research Database (Denmark)
Lisonek, Petr
1996-01-01
A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...
Maximal indecomposable past sets and event horizons
International Nuclear Information System (INIS)
Krolak, A.
1984-01-01
The existence of maximal indecomposable past sets MIPs is demonstrated using the Kuratowski-Zorn lemma. A criterion for the existence of an absolute event horizon in space-time is given in terms of MIPs and a relation to black hole event horizon is shown. (author)
Definable maximal discrete sets in forcing extensions
DEFF Research Database (Denmark)
Törnquist, Asger Dag; Schrittesser, David
2018-01-01
Let R be a Σ^1_1 binary relation, and recall that a set A is R-discrete if no two elements of A are related by R. We show that in the Sacks and Miller forcing extensions of L there is a Δ^1_2 maximal R-discrete set. We use this to answer in the negative the main question posed in [5] by showing...
Maximizing Success by Work Setting Diagnosis.
Sturner, William F.
1990-01-01
This article confronts the tension between creative expression and organizational realities in the workplace. It offers tips on diagnosing such components of the work setting as personal potential and ambition, supervisors' roles, colleagues' roles, organizational culture, and other variables that may influence the success of one's innovations in…
Maximal lattice free bodies, test sets and the Frobenius problem
DEFF Research Database (Denmark)
Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt
Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral matrix... The method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune.
Directory of Open Access Journals (Sweden)
Abbas Asadi
2016-01-01
Conclusions: Although both plyometric training methods improved lower body maximal-intensity exercise performance, the traditional sets methods resulted in greater adaptations in sprint performance, while the cluster sets method resulted in greater jump and agility adaptations.
Minimal Blocking Sets in PG(2, 8) and Maximal Partial Spreads in PG(3, 8)
DEFF Research Database (Denmark)
Barat, Janos
2004-01-01
We prove that PG(2, 8) does not contain minimal blocking sets of size 14. Using this result we prove that 58 is the largest size for a maximal partial spread of PG(3, 8). This supports the conjecture that q^2 - q + 2 is the largest size for a maximal partial spread of PG(3, q), q > 7.
Reconfiguring Independent Sets in Claw-Free Graphs
Bonsma, P.S.; Kamiński, Marcin; Wrochna, Marcin; Ravi, R.; Gørtz, Inge Li
We present a polynomial-time algorithm that, given two independent sets in a claw-free graph G, decides whether one can be transformed into the other by a sequence of elementary steps. Each elementary step is to remove a vertex v from the current independent set S and to add a new vertex w (not in S) so that the resulting set is again independent.
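For intuition, the reconfiguration question can be decided by exhaustive search on small instances: enumerate all independent sets of the given size and search the graph whose edges are the elementary steps. This brute-force sketch is for illustration only; the paper's contribution is a polynomial-time algorithm for claw-free graphs:

```python
from itertools import combinations
from collections import deque

def independent_sets(adj, k):
    """All independent sets of size k (brute force; small graphs only)."""
    return {frozenset(c) for c in combinations(adj, k)
            if all(v not in adj[u] for u, v in combinations(c, 2))}

def reconfigurable(adj, s, t):
    """BFS over the reconfiguration graph: one elementary step removes a
    vertex v from the current set and adds a new vertex w not in it."""
    sets = independent_sets(adj, len(s))
    start, goal = frozenset(s), frozenset(t)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return True
        for v in cur:
            for w in set(adj) - cur:
                nxt = (cur - {v}) | {w}
                if nxt in sets and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# path 0-1-2-3 (claw-free); transform {0,2} into {1,3}
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(reconfigurable(adj, {0, 2}, {1, 3}))  # True
```

On the path, {0,2} reaches {1,3} via the intermediate independent set {0,3}, so the answer is yes in two elementary steps.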
The number of independent sets in unicyclic graphs
DEFF Research Database (Denmark)
Pedersen, Anders Sune; Vestergaard, Preben Dahl
In this paper, we determine upper and lower bounds for the number of independent sets in a unicyclic graph in terms of its order. This gives an upper bound for the number of independent sets in a connected graph which contains at least one cycle. We also determine the upper bound for the number...
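For small graphs, the quantity being bounded above, the number of independent sets (often called the Merrifield-Simmons index, counting the empty set), can be computed by brute force. The pendant-triangle example below is the smallest unicyclic graph on four vertices and is purely illustrative:

```python
from itertools import combinations

def count_independent_sets(adj):
    """Count all independent sets, including the empty set, by brute force."""
    vs = list(adj)
    total = 0
    for r in range(len(vs) + 1):
        for c in combinations(vs, r):
            if all(v not in adj[u] for u, v in combinations(c, 2)):
                total += 1
    return total

# triangle 0-1-2 with a pendant vertex 3 attached to 2 (unicyclic)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(count_independent_sets(adj))  # 7
```

The seven sets are the empty set, the four singletons, and {0,3}, {1,3}.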
Influence maximization in social networks under an independent cascade-based model
Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan
2016-02-01
The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the spread of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. When more users became involved in the discussions, users balanced their own opinions and those of their neighbors. The number of users who did not change positive opinions was used to determine positive influence. Corresponding influential users who had maximum positive influence were then obtained. Experiments were conducted on three real networks, namely, Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, thus indicating better performance compared with the baseline methods.
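The underlying independent cascade dynamics (the base of the IMIC-OC model; its opinion-balancing layer is omitted here) is typically evaluated by Monte Carlo simulation. The uniform propagation probability p, the graph, and all names below are assumptions of this sketch:

```python
import random

def independent_cascade(adj, seeds, p, rng):
    """One Monte-Carlo run of the independent cascade model: each newly
    activated node gets a single chance to activate each inactive
    neighbor, succeeding independently with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def expected_spread(adj, seeds, p, runs=2000, seed=0):
    """Average cascade size over repeated simulations."""
    rng = random.Random(seed)
    return sum(len(independent_cascade(adj, seeds, p, rng))
               for _ in range(runs)) / runs

# 4-cycle toy network seeded at node 0
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(expected_spread(adj, {0}, 0.5))
```

Influence maximization then amounts to choosing the seed set that maximizes this expected spread, usually greedily.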
Sartor, F.; Vernillo, G.; de Morree, H.M.; Bonomi, A.G.; La Torre, A.; Kubis, H.P.; Veicsteinas, A.
2013-01-01
Assessment of the functional capacity of the cardiovascular system is essential in sports medicine. For athletes, the maximal oxygen uptake (V˙O2max) provides valuable information about their aerobic power. In the clinical setting, the V˙O2max provides important diagnostic and prognostic information
Distributed Large Independent Sets in One Round On Bounded-independence Graphs
Halldorsson , Magnus M.; Konrad , Christian
2015-01-01
We present a randomized one-round, single-bit messages, distributed algorithm for the maximum independent set problem in polynomially bounded-independence graphs with poly-logarithmic approximation factor. Bounded-independence graphs capture various models of wireless networks such as the unit disc graphs model and the quasi unit disc graphs model. For instance, on unit disc graphs, our achieved approximation ratio is O((log(n)/log(log(n)))^2). A starting point of our w...
Tutte sets in graphs I: Maximal tutte sets and D-graphs
Bauer, D.; Broersma, Haitze J.; Morgana, A.; Schmeichel, E.
A well-known formula of Tutte and Berge expresses the size of a maximum matching in a graph $G$ in terms of what is usually called the deficiency of $G$. A subset $X$ of $V(G)$ for which this deficiency is attained is called a Tutte set of $G$. While much is known about maximum matchings, less is
A Comparison of Heuristics with Modularity Maximization Objective using Biological Data Sets
Directory of Open Access Journals (Sweden)
Pirim Harun
2016-01-01
Finding groups of objects exhibiting similar patterns is an important data analytics task. Many disciplines have their own terminologies such as cluster, group, clique, community etc. defining the similar objects in a set. Adopting the term community, many exact and heuristic algorithms are developed to find the communities of interest in available data sets. Here, three heuristic algorithms to find communities are compared using five gene expression data sets. The heuristics have a common objective function of maximizing the modularity that is a quality measure of a partition and a reflection of objects’ relevance in communities. Partitions generated by the heuristics are compared with the real ones using the adjusted rand index, one of the most commonly used external validation measures. The paper discusses the results of the partitions on the mentioned biological data sets.
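The external validation step can be reproduced in a few lines: the adjusted Rand index compares a computed partition with the reference partition, correcting the plain Rand index for chance agreement. The formula is standard; the function name and toy labelings are illustrative:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two partitions given as label lists:
    (index - expected index) / (max index - expected index), computed
    from the pair-count contingency table."""
    n = len(labels_a)
    pair_counts = Counter(zip(labels_a, labels_b))
    index = sum(comb(c, 2) for c in pair_counts.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)

# identical partitions up to relabeling score a perfect 1.0
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```

An ARI near 1 means the heuristic's communities match the reference grouping; values near 0 indicate chance-level agreement.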
DEFF Research Database (Denmark)
Wone, B W M; Madsen, Per; Donovan, E R
2015-01-01
Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR...
Testing the statistical compatibility of independent data sets
International Nuclear Information System (INIS)
Maltoni, M.; Schwetz, T.
2003-01-01
We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ² minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistic is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
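The test statistic, often called the parameter goodness-of-fit, is the difference between the χ² minimum of the combined fit and the sum of the individual minima, so it only counts the tension between data sets, not the quality of each individual fit. A toy numeric sketch with two invented quadratic one-parameter χ² curves:

```python
# Two toy one-parameter chi-square curves, e.g. from two "experiments"
# that prefer different values of a shared parameter x (curves invented):

def chi2_1(x):  # experiment 1 prefers x = 0 with uncertainty 0.5
    return (x - 0.0) ** 2 / 0.5 ** 2

def chi2_2(x):  # experiment 2 prefers x = 2 with uncertainty 1.0
    return (x - 2.0) ** 2 / 1.0 ** 2

# PG statistic: chi2_PG = min_x [chi2_1 + chi2_2] - min_x chi2_1 - min_x chi2_2
grid = [i * 0.001 for i in range(-1000, 4001)]
joint_min = min(chi2_1(x) + chi2_2(x) for x in grid)
pg = joint_min - min(chi2_1(x) for x in grid) - min(chi2_2(x) for x in grid)
print(round(pg, 3))  # -> 3.2
```

Here each curve alone has a perfect minimum (χ² = 0), yet the PG statistic is 3.2 for one shared parameter, exposing the disagreement that a combined fit over many insensitive data points would dilute.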
Sartor, Francesco; Vernillo, Gianluca; de Morree, Helma M; Bonomi, Alberto G; La Torre, Antonio; Kubis, Hans-Peter; Veicsteinas, Arsenio
2013-09-01
Assessment of the functional capacity of the cardiovascular system is essential in sports medicine. For athletes, the maximal oxygen uptake (VO2max) provides valuable information about their aerobic power. In the clinical setting, the VO2max provides important diagnostic and prognostic information in several clinical populations, such as patients with coronary artery disease or heart failure. Likewise, VO2max assessment can be very important to evaluate fitness in asymptomatic adults. Although direct determination of VO2max is the most accurate method, it requires a maximal level of exertion, which brings a higher risk of adverse events in individuals with an intermediate to high risk of cardiovascular problems. Estimation of VO2max during submaximal exercise testing can offer a precious alternative. Over the past decades, many protocols have been developed for this purpose. The present review gives an overview of these submaximal protocols and aims to facilitate appropriate test selection in sports, clinical, and home settings. Several factors must be considered when selecting a protocol: (i) the population being tested and its specific needs in terms of safety, supervision, and accuracy and repeatability of the VO2max estimation; (ii) the parameters upon which the prediction is based (e.g. heart rate, power output, rating of perceived exertion [RPE]), as well as the need for additional clinically relevant parameters (e.g. blood pressure, ECG); (iii) the appropriate test modality, which should meet the above-mentioned requirements, be in line with the functional mobility of the target population, and depends on the available equipment. In the sports setting, high repeatability is crucial to track training-induced seasonal changes. In the clinical setting, special attention must be paid to the test modality, because multiple physiological parameters often need to be measured during test execution. When estimating VO2max, one has...
Directory of Open Access Journals (Sweden)
David A Milder
The repetitive discharges required to produce a sustained muscle contraction result in activity-dependent hyperpolarization of the motor axons and a reduction in the force-generating capacity of the muscle. We investigated the relationship between these changes in the adductor pollicis muscle and the motor axons of its ulnar nerve supply, and the reproducibility of these changes. Ten subjects performed a 1-min maximal voluntary contraction. Activity-dependent changes in axonal excitability were measured using threshold tracking with electrical stimulation at the wrist; changes in the muscle were assessed as evoked and voluntary electromyography (EMG) and isometric force. Separate components of axonal excitability and muscle properties were tested at 5 min intervals after the sustained contraction in 5 separate sessions. The current threshold required to produce the target muscle action potential increased immediately after the contraction by 14.8% (p<0.05), reflecting decreased axonal excitability secondary to hyperpolarization. This was not correlated with the decline in amplitude of muscle force or evoked EMG. A late reversal in threshold current after the initial recovery from hyperpolarization peaked at -5.9% at ∼35 min (p<0.05). This pattern was mirrored by other indices of axonal excitability, revealing a previously unreported depolarization of motor axons in the late recovery period. Measures of axonal excitability were relatively stable at rest but less so after sustained activity. The coefficient of variation (CoV) for threshold current increase was higher after activity (CoV 0.54, p<0.05), whereas changes in voluntary (CoV 0.12) and evoked twitch (CoV 0.15) force were relatively stable. These results demonstrate that activity-dependent changes in motor axon excitability are unlikely to contribute to concomitant changes in the muscle after sustained activity in healthy people. The variability in axonal excitability after sustained activity...
Reliability analysis of a sensitive and independent stabilometry parameter set.
Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M
2018-01-01
Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.
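The reliability indices used above can be reproduced from raw test-retest data. The sketch below computes a consistency ICC (ICC(3,1)) from a two-way ANOVA decomposition, then derives SEM and MDC95 from it; the toy subject-by-trial data are invented:

```python
import math

def icc_sem_mdc(trials):
    """Test-retest reliability for repeated trials (rows = subjects,
    cols = trials): ICC(3,1) = (MS_rows - MS_err) / (MS_rows + (k-1)*MS_err),
    then SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM."""
    n, k = len(trials), len(trials[0])
    grand = sum(map(sum, trials)) / (n * k)
    row_means = [sum(r) / k for r in trials]
    col_means = [sum(r[j] for r in trials) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in trials for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    sd = math.sqrt(ss_total / (n * k - 1))
    sem = sd * math.sqrt(1 - icc)
    return icc, sem, 1.96 * math.sqrt(2) * sem

# toy test-retest data: 4 subjects x 2 trials of, say, CoP path length (mm)
icc, sem, mdc95 = icc_sem_mdc([[100.0, 102.0], [110.0, 109.0],
                               [95.0, 96.0], [120.0, 118.0]])
print(round(icc, 3), round(sem, 2), round(mdc95, 2))
```

With between-subject spread much larger than the trial-to-trial wobble, the ICC comes out close to 1, mirroring the "fair to good" judgments in the abstract.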
Fotopoulou, Christina; Jones, Benjamin P; Savvatis, Konstantinos; Campbell, Jeremy; Kyrgiou, Maria; Farthing, Alan; Brett, Stephen; Roux, Rene; Hall, Marcia; Rustin, Gordon; Gabra, Hani; Jiao, Long; Stümpfle, Richard
2016-09-01
To assess surgical morbidity and mortality of maximal effort cytoreductive surgery for disseminated epithelial ovarian cancer (EOC) in a UK tertiary center. A monocentric prospective analysis of surgical morbidity and mortality was performed for all consecutive EOC patients who underwent extensive cytoreductive surgery between 01/2013 and 12/2014. Surgical complexity was assessed by the Mayo clinic surgical complexity score (SCS). Only patients with high SCS ≥5 were included in the analysis. We evaluated 118 stage IIIC/IV patients, with a median age of 63 years (range 19-91); 47.5 % had ascites and 29 % a pleural effusion. Median duration of surgery was 247 min (range 100-540 min). Median surgical complexity score was 10 (range 5-15) consisting of bowel resection (71 %), stoma formation (13.6 %), diaphragmatic stripping/resection (67 %), liver/liver capsule resection (39 %), splenectomy (20 %), resection stomach/lesser sac (26.3 %), pleurectomy (17 %), coeliac trunk/subdiaphragmatic lymphadenectomy (8 %). Total macroscopic tumor clearance rate was 89 %. Major surgical complication rate was 18.6 % (n = 22), with a 28-day and 3-month mortality of 1.7 and 3.4 %, respectively. The anastomotic leak rate was 0.8 %; fistula/bowel perforation 3.4 %; thromboembolism 3.4 % and reoperation 4.2 %. Median intensive care unit and hospital stay were 1.7 (range 0-104) and 8 days (range 4-118), respectively. Four patients (3.3 %) failed to receive chemotherapy within the first 8 postoperative weeks. Maximal effort cytoreductive surgery for EOC is feasible within a UK setting with acceptable morbidity, low intestinal stoma rates and without clinically relevant delays to postoperative chemotherapy. Careful patient selection, and coordinated multidisciplinary effort appear to be the key for good outcome. Future evaluations should include quality of life analyses.
Wone, B W M; Madsen, P; Donovan, E R; Labocha, M K; Sears, M W; Downs, C J; Sorensen, D A; Hayes, J P
2015-04-01
Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR, and BMR was slightly, but not significantly, higher (2.5%). Compared with controls, MMR was significantly higher (5.3%) in antagonistically selected lines, and BMR was slightly, but not significantly, lower (4.2%). Analysis of breeding values revealed no positive genetic trend for elevated BMR in high-MMR lines. A weak positive genetic correlation was detected between MMR and BMR. That weak positive genetic correlation supports the aerobic capacity model for the evolution of endothermy in the sense that it fails to falsify a key model assumption. Overall, the results suggest that at least in these mice there is significant capacity for independent evolution of metabolic traits. Whether that is true in the ancestral animals that evolved endothermy remains an important but unanswered question.
Maximal translational equivalence classes of musical patterns in point-set representations
DEFF Research Database (Denmark)
Collins, Tom; Meredith, David
2013-01-01
Representing musical notes as points in pitch-time space causes repeated motives and themes to appear as translationally related patterns that often correspond to maximal translatable patterns (MTPs). However, an MTP is also often the union of a salient pattern with one or two temporally isolated...
Decomposing a planar graph into an independent set and a 3-degenerate graph
DEFF Research Database (Denmark)
Thomassen, Carsten
2001-01-01
We prove the conjecture made by O. V. Borodin in 1976 that the vertex set of every planar graph can be decomposed into an independent set and a set inducing a 3-degenerate graph. (C) 2001 Academic Press....
Asadi, Abbas; Ramírez-Campillo, Rodrigo
2016-01-01
The aim of this study was to compare the effects of 6-week cluster versus traditional plyometric training sets on jumping ability, sprint and agility performance. Thirteen college students were assigned to a cluster sets group (N=6) or a traditional sets group (N=7). Both training groups completed the same training program. The traditional group completed five sets of 20 repetitions with 2 min of rest between sets each session, while the cluster group completed five sets of 20 [2×10] repetitions with 30/90-s rest each session. Subjects were evaluated for countermovement jump (CMJ), standing long jump (SLJ), t test, 20-m and 40-m sprint test performance before and after the intervention. Both groups showed improvements after training; however, the traditional sets method resulted in greater adaptations in sprint performance, while the cluster sets method resulted in greater jump and agility adaptations.
Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.
Nath, Abhigyan; Subbiah, Karthikeyan
2015-12-01
Lipocalins are short in sequence length and perform several important biological functions. These proteins have less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time-consuming process. Computational methods based on sequence similarity for allocating putative members to this family are also largely ineffective due to the low sequence similarity existing among the members of this family. Consequently, machine learning methods become a viable alternative for their prediction, using the underlying sequence/structurally derived features as the input. Ideally, any machine learning based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. Near perfect learning can be achieved by training the model with diverse types of input instances belonging to the different regions of the entire input space. Furthermore, the prediction performance can be improved by balancing the training set, as imbalanced data sets tend to bias the prediction towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability without any classification bias, through diversified and balanced training sets, and (ii) enhanced prediction accuracy, by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised K-means clustering algorithm to create diversified clusters of input patterns and created the diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability-based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and a K nearest neighbour algorithm (which produced greater specificity) to achieve the enhanced predictive performance.
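Once cluster labels are available (e.g. from K-means), the balancing step described above reduces to drawing the same number of instances from each cluster. A minimal sketch with invented data and names; the clustering itself is assumed to have been done already:

```python
import random
from collections import defaultdict

def balanced_training_set(samples, cluster_labels, per_cluster, seed=0):
    """Build a diversified, balanced training set by drawing the same
    number of samples from each precomputed cluster."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for s, c in zip(samples, cluster_labels):
        buckets[c].append(s)
    train = []
    for c, members in sorted(buckets.items()):
        rng.shuffle(members)          # random draw within each cluster
        train.extend(members[:per_cluster])
    return train

samples = list(range(12))
labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2]  # imbalanced clusters
train = balanced_training_set(samples, labels, per_cluster=2)
print(len(train))  # 6
```

Even though cluster 0 holds half the data, each cluster contributes exactly two training instances, removing the majority-class bias at the cost of discarding some majority-cluster samples.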
Maximizing the Lifetime of Wireless Sensor Networks Using Multiple Sets of Rendezvous
Directory of Open Access Journals (Sweden)
Bo Li
2015-01-01
In wireless sensor networks (WSNs), there is a “crowded center effect” where the energy of nodes located near a data sink drains much faster than other nodes resulting in a short network lifetime. To mitigate the “crowded center effect,” rendezvous points (RPs) are used to gather data from other nodes. In order to prolong the lifetime of WSN further, we propose using multiple sets of RPs in turn to average the energy consumption of the RPs. The problem is how to select the multiple sets of RPs and how long to use each set of RPs. An optimal algorithm and a heuristic algorithm are proposed to address this problem. The optimal algorithm is highly complex and only suitable for small scale WSN. The performance of the proposed algorithms is evaluated through simulations. The simulation results indicate that the heuristic algorithm approaches the optimal one and that using multiple RP sets can significantly prolong network lifetime.
Decomposing a planar graph of girth 5 into an independent set and a forest
DEFF Research Database (Denmark)
Kawarabayashi, Ken-ichi; Thomassen, Carsten
2009-01-01
We use a list-color technique to extend the result of Borodin and Glebov that the vertex set of every planar graph of girth at least 5 can be partitioned into an independent set and a set which induces a forest. We apply this extension to also extend Grötzsch's theorem that every planar triangle-...
Zhang, Zhengfang; Chen, Weifeng
2018-05-01
Maximization of the smallest eigenfrequency of the linearized elasticity system with an area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent the two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes a nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.
International Nuclear Information System (INIS)
Kim, Yongbok; Trombetta, Mark G.
2011-01-01
Purpose: An objective method was proposed and compared with a manual selection method to determine the planner-independent maximal skin and rib dose in balloon-based high dose rate (HDR) brachytherapy planning. Methods: The maximal dose to skin and rib was objectively extracted from a dose volume histogram (DVH) of the skin and rib volumes. A virtual skin volume was produced by expanding the skin surface in three dimensions (3D) external to the breast with a certain thickness in the planning computed tomography (CT) images. The maximal dose to this volume therefore occurs on the skin surface, as in the conventional manual selection method. The rib was also delineated in the planning CT images and its maximal dose was extracted from its DVH. The absolute (Abdiff = |D_max(Man) - D_max(DVH)|) and relative (Rediff[%] = 100 x |D_max(Man) - D_max(DVH)| / D_max(DVH)) maximal skin and rib dose differences between the manual selection method (D_max(Man)) and the objective method (D_max(DVH)) were measured for 50 balloon-based HDR (25 MammoSite and 25 Contura) patients. Results: The average ± standard deviation of the maximal dose difference was 1.67% ± 1.69% of the prescribed dose (PD). No statistical difference was observed between MammoSite and Contura patients for either the Abdiff or the Rediff[%] values. However, a statistically significant difference in Abdiff was observed between the higher dose range (D_max > 90%) and the lower dose range (D_max < 90%): 2.16% ± 1.93% vs. 1.19% ± 1.25%, with a p value of 0.0049. The Rediff[%] analysis, however, eliminated the inverse square factor, and there was no statistically significant difference (p value = 0.8931) between the high and low dose ranges. Conclusions: The objective method, using volumetric information of the skin and rib, can determine the planner-independent maximal dose compared with the manual selection method. The difference was <2% of PD, on average, provided appropriate attention is paid to selecting the manual dose point in the 3D planning CT images.
International Nuclear Information System (INIS)
Pal, Karoly F.; Vertesi, Tamas
2010-01-01
The I3322 inequality is the simplest bipartite two-outcome Bell inequality beyond the Clauser-Horne-Shimony-Holt (CHSH) inequality, consisting of three two-outcome measurements per party. In the case of the CHSH inequality, the maximal quantum violation can already be attained with local two-dimensional quantum systems; however, there is no such evidence for the I3322 inequality. In this paper a family of measurement operators and states is given which enables us to attain the maximum quantum value in an infinite-dimensional Hilbert space. Further, it is conjectured that our construction is optimal in the sense that measuring finite-dimensional quantum systems is not enough to achieve the true quantum maximum. We also describe an efficient iterative algorithm for computing the quantum maximum of an arbitrary two-outcome Bell inequality in any given Hilbert space dimension. This algorithm played a key role in obtaining our results for the I3322 inequality, and we also applied it to improve on our previous results concerning the maximum quantum violation of several bipartite two-outcome Bell inequalities with up to five settings per party.
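For intuition on what "computing the quantum maximum of a two-outcome Bell inequality in a given Hilbert space dimension" involves, the CHSH case is small enough to verify directly: with the standard optimal qubit measurements, the quantum maximum over states is the largest eigenvalue of the Bell operator, which equals Tsirelson's bound 2√2. The following numpy sketch is an illustration of that fact, not the authors' iterative algorithm:

```python
import numpy as np

# Pauli observables for Alice; rotated (optimal) observables for Bob.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

# CHSH Bell operator: A0(x)B0 + A0(x)B1 + A1(x)B0 - A1(x)B1.
bell = (np.kron(A0, B0) + np.kron(A0, B1)
        + np.kron(A1, B0) - np.kron(A1, B1))

# Quantum maximum over all two-qubit states = largest eigenvalue.
max_violation = np.linalg.eigvalsh(bell).max()
print(round(max_violation, 6))   # 2.828427, i.e. 2*sqrt(2)
```

For larger inequalities such as I3322, the measurements themselves are unknown, which is why an iterative (see-saw style) optimization over operators and states is needed; the eigenvalue step above is only the innermost piece of such a procedure.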
Directory of Open Access Journals (Sweden)
Grignon JS
2014-05-01
Full Text Available Jessica S Grignon,1,2 Jenny H Ledikwe,1,2 Ditsapelo Makati,2 Robert Nyangah,2 Baraedi W Sento,2 Bazghina-werq Semo1,2 1Department of Global Health, University of Washington, Seattle, WA, USA; 2International Training and Education Center for Health, Gaborone, Botswana Abstract: To address health systems challenges in limited-resource settings, global health initiatives, particularly the President's Emergency Plan for AIDS Relief, have seconded health workers to the public sector. Implementation considerations for secondment as a health workforce development strategy are not well documented. The purpose of this article is to present outcomes, best practices, and lessons learned from a President's Emergency Plan for AIDS Relief-funded secondment program in Botswana. Outcomes are documented across four World Health Organization health systems' building blocks. Best practices include documentation of joint stakeholder expectations, collaborative recruitment, and early identification of counterparts. Lessons learned include inadequate ownership, a two-tier employment system, and ill-defined position duration. These findings can inform program and policy development to maximize the benefit of health workforce secondment. Secondment requires substantial investment, and emphasis should be placed on high-level technical positions responsible for building systems, developing health workers, and strengthening government to translate policy into programs. Keywords: human resources, health policy, health worker, HIV/AIDS, PEPFAR
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. The iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set for a six-component balance.
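The augmentation trick described above, treating a surplus independent variable (here, temperature) as an extra dependent variable so that the variable counts match, can be sketched with synthetic data. The key property, also the paper's, is that adding the extra dependent column leaves the regression fit of every real gage output unchanged. All variable names and sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: 7 independents (6 loads + temperature T)
# but only 6 measured gage outputs -- one dependent variable short.
n = 200
loads_and_T = rng.standard_normal((n, 7))
true_coeffs = rng.standard_normal((7, 6))
gage_outputs = loads_and_T @ true_coeffs + 0.01 * rng.standard_normal((n, 6))

# Fit each gage output on all 7 independents (with intercept).
design = np.column_stack([np.ones(n), loads_and_T])
coef_orig, *_ = np.linalg.lstsq(design, gage_outputs, rcond=None)

# New method: also treat T (column 6) as a 7th dependent variable so the
# dependent/independent counts match; refit the enlarged output matrix.
outputs_aug = np.column_stack([gage_outputs, loads_and_T[:, 6]])
coef_aug, *_ = np.linalg.lstsq(design, outputs_aug, rcond=None)

# The fit of the six real gage outputs is untouched by the augmentation,
# since least squares solves each output column independently.
assert np.allclose(coef_orig, coef_aug[:, :6])
```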
Czech Academy of Sciences Publication Activity Database
Roubíček, Tomáš
2015-01-01
Roč. 113, January (2015), s. 33-50 ISSN 0362-546X R&D Projects: GA ČR GAP201/10/0357 Institutional support: RVO:61388998 Keywords : rate-independent processes * weak solutions * maximum-dissipation principle Subject RIV: BA - General Mathematics Impact factor: 1.125, year: 2015 http://ac.els-cdn.com/S0362546X14003101/1-s2.0-S0362546X14003101-main.pdf?_tid=c4e832ba-d4c2-11e5-8448-00000aacb35f&acdnat=1455637049_0a70d2c2e8ce52a598373a559623d776
Merrifield-simmons index and minimum number of independent sets in short trees
DEFF Research Database (Denmark)
Frendrup, Allan; Pedersen, Anders Sune; Sapozhenko, Alexander A.
2013-01-01
In Ars Comb. 84 (2007), 85-96, Pedersen and Vestergaard posed the problem of determining a lower bound for the number of independent sets in a tree of fixed order and diameter d. Asymptotically, we give here a complete solution for trees of diameter d...
An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets
DEFF Research Database (Denmark)
Nielsen, Michael Bang; Museth, Ken
2004-01-01
enforced by the convex boundaries of an underlying Cartesian computational grid. Here we present a novel, very memory efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid independent high resolution level sets. The key features of our new data structure are...
Scope of physician procedures independently billed by mid-level providers in the office setting.
Coldiron, Brett; Ratnarathorn, Mondhipa
2014-11-01
Mid-level providers (nurse practitioners and physician assistants) were originally envisioned to provide primary care services in underserved areas. This study details the current scope of independent procedural billing to Medicare of difficult, invasive, and surgical procedures by medical mid-level providers. To understand the scope of independent billing to Medicare for procedures performed by mid-level providers in an outpatient office setting for a calendar year. Analyses of the 2012 Medicare Physician/Supplier Procedure Summary Master File, which reflects fee-for-service claims that were paid by Medicare, for Current Procedural Terminology procedures independently billed by mid-level providers. Outpatient office setting among health care providers. The scope of independent billing to Medicare for procedures performed by mid-level providers. In 2012, nurse practitioners and physician assistants billed independently for more than 4 million procedures at our cutoff of 5000 paid claims per procedure. Most (54.8%) of these procedures were performed in the specialty area of dermatology. The findings of this study are relevant to safety and quality of care. Recently, the shortage of primary care clinicians has prompted discussion of widening the scope of practice for mid-level providers. It would be prudent to temper widening the scope of practice of mid-level providers by recognizing that mid-level providers are not solely limited to primary care, and may involve procedures for which they may not have formal training.
Pepper, Dominique J; Schomaker, Michael; Wilkinson, Robert J; de Azevedo, Virginia; Maartens, Gary
2015-01-01
Identifying those at increased risk of death during TB treatment is a priority in resource-constrained settings. We performed this study to determine predictors of mortality during TB treatment. We performed a retrospective analysis of a TB surveillance population in a high HIV prevalence area that was recorded in ETR.net (Electronic Tuberculosis Register). Adult TB cases initiated TB treatment from 2007 through 2009 in Khayelitsha, South Africa. Cox proportional hazards models were used to identify risk factors for death (after multiple imputations for missing data). Model selection was performed using Akaike's Information Criterion to obtain the most relevant predictors of death. Of 16,209 adult TB cases, 851 (5.3 %) died during TB treatment. In all TB cases, advancing age, co-infection with HIV, a prior history of TB and the presence of both pulmonary and extra-pulmonary TB were independently associated with an increasing hazard of death. In HIV-infected TB cases, advancing age and female gender were independently associated with an increasing hazard of death. Increasing CD4 counts and antiretroviral treatment during TB treatment were protective against death. In HIV-uninfected TB cases, advancing age was independently associated with death, whereas smear-positive disease was protective. We identified several independent predictors of death during TB treatment in resource-constrained settings. Our findings inform resource-constrained settings about certain subgroups of TB patients that should be targeted to improve mortality during TB treatment.
Directory of Open Access Journals (Sweden)
Tim Palmer
2015-11-01
Full Text Available Invariant Set (IS) theory is a locally causal ontic theory of physics based on the Cosmological Invariant Set postulate that the universe U can be considered a deterministic dynamical system evolving precisely on a (suitably constructed) fractal dynamically invariant set in U's state space. IS theory violates the Bell inequalities by violating Measurement Independence. Despite this, IS theory is not fine-tuned, is not conspiratorial, does not constrain experimenter free will, and does not invoke retrocausality. The reasons behind these claims are discussed in this paper. They arise from properties not found in conventional ontic models: the invariant set has zero measure in its Euclidean embedding space, has Cantor-set structure homeomorphic to the p-adic integers (p >> 0), and is non-computable. In particular, it is shown that the p-adic metric encapsulates the physics of the Cosmological Invariant Set postulate and provides the technical means to demonstrate no fine-tuning or conspiracy. Quantum theory can be viewed as the singular limit of IS theory when p is set equal to infinity. Since it is based around a top-down constraint from cosmology, IS theory suggests that gravitational and quantum physics will be unified by a gravitational theory of the quantum, rather than a quantum theory of gravity. Some implications arising from such a perspective are discussed.
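For readers unfamiliar with the p-adic metric invoked here, a minimal, generic number-theory illustration (not IS theory itself): two integers are p-adically close when their difference is divisible by a high power of p.

```python
from fractions import Fraction

def padic_valuation(n: int, p: int) -> int:
    """Largest k such that p**k divides n (n != 0)."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def padic_distance(a: int, b: int, p: int) -> Fraction:
    """d_p(a, b) = p**(-v_p(a - b)), and 0 when a == b."""
    if a == b:
        return Fraction(0)
    return Fraction(1, p ** padic_valuation(a - b, p))
```

For example, d_2(0, 32) = 1/32 while d_2(0, 3) = 1: divisibility by high powers of p, not small Euclidean difference, is what makes points close, which is why this metric behaves so differently from the Euclidean one on the embedding space.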
Generalizations of the subject-independent feature set for music-induced emotion recognition.
Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet, how to achieve acceptable accuracy on a practical system with as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features, based on differential power asymmetry of symmetric electrode pairs [1], with emphasis on their applicability to subject variability in the music-induced emotion classification problem. The results of this study have validated the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotion induced by music excerpts not limited to the music database that was used to derive the emotion-specific features.
An upper bound on the number of independent sets in a tree
DEFF Research Database (Denmark)
Pedersen, Anders Sune
2007-01-01
The main result of this paper is an upper bound on the number of independent sets in a tree in terms of the order and diameter of the tree. This new upper bound is a refinement of the bound given by Prodinger and Tichy [Fibonacci Q., 20 (1982), no. 1, 16-21]. Finally, we give a sufficient...... condition for the new upper bound to be better than the upper bound given by Brigham, Chandrasekharan and Dutton [Fibonacci Q., 31 (1993), no. 2, 98-104]....
An upper bound on the number of independent sets in a tree
DEFF Research Database (Denmark)
Vestergaard, Preben Dahl; Pedersen, Anders Sune
The main result of this paper is an upper bound on the number of independent sets in a tree in terms of the order and diameter of the tree. This new upper bound is a refinement of the bound given by Prodinger and Tichy [Fibonacci Q., 20 (1982), no. 1, 16-21]. Finally, we give a sufficient condition...... for the new upper bound to be better than the upper bound given by Brigham, Chandrasekharan and Dutton [Fibonacci Q., 31 (1993), no. 2, 98-104]....
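The quantity bounded in these two records, the number of independent sets of a tree (the Merrifield-Simmons index), can be computed exactly by a standard two-state dynamic program over the tree. This is a textbook method, included only to make the counted object concrete, not the papers' proof technique:

```python
def count_independent_sets(n, edges):
    """Number of independent sets (including the empty set) in a tree
    on vertices 0..n-1, given as an edge list."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def dfs(u, parent):
        incl, excl = 1, 1        # sets with / without u, within u's subtree
        for w in adj[u]:
            if w != parent:
                wi, we = dfs(w, u)
                incl *= we       # a neighbour of u must be excluded
                excl *= wi + we  # child free to be in or out
        return incl, excl

    return sum(dfs(0, -1))

# Path on 3 vertices: {}, {0}, {1}, {2}, {0,2}  ->  5 sets.
print(count_independent_sets(3, [(0, 1), (1, 2)]))   # 5
```

On a path, this count reproduces the Fibonacci numbers, the extremal case in the Prodinger-Tichy bound cited above.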
Scudese, Estevão; Willardson, Jeffrey M; Simão, Roberto; Senna, Gilmar; de Salles, Belmiro F; Miranda, Humberto
2015-11-01
The purpose of this study was to compare different rest intervals between sets on repetition consistency and ratings of perceived exertion (RPE) during consecutive bench press sets with an absolute 3RM (3 repetition maximum) load. Sixteen trained men (23.75 ± 4.21 years; 74.63 ± 5.36 kg; 175 ± 4.64 cm; bench press relative strength: 1.44 ± 0.19 kg/kg of body mass) attended 4 randomly ordered sessions during which 5 consecutive sets of the bench press were performed with an absolute 3RM load and 1, 2, 3, or 5 minutes of rest interval between sets. The results indicated that significantly greater bench press repetitions were completed with 2, 3, and 5 minutes vs. 1-minute rest between sets (p ≤ 0.05); no significant differences were noted between the 2, 3, and 5 minutes rest conditions. For the 1-minute rest condition, performance reductions (relative to the first set) were observed commencing with the second set; whereas for the other conditions (2, 3, and 5 minutes rest), performance reductions were not evident until the third and fourth sets. The RPE values before each of the successive sets were significantly greater, commencing with the second set for the 1-minute vs. the 3 and 5 minutes rest conditions. Significant increases were also evident in RPE immediately after each set between the 1 and 5 minutes rest conditions from the second through fifth sets. These findings indicate that when utilizing an absolute 3RM load for the bench press, practitioners may prescribe a time-efficient minimum of 2 minutes rest between sets without significant impairments in repetition performance. However, lower perceived exertion levels may necessitate prescription of a minimum of 3 minutes rest between sets.
EEG-based recognition of video-induced emotions: selecting subject-independent feature set.
Kortelainen, Jukka; Seppänen, Tapio
2013-01-01
Emotions are fundamental for everyday life, affecting our communication, learning, perception, and decision making. Including emotions in human-computer interaction (HCI) could be seen as a significant step forward, offering great potential for developing advanced future technologies. Since the electrical activity of the brain is affected by emotions, the electroencephalogram (EEG) offers an interesting channel for improving the HCI. In this paper, the selection of a subject-independent feature set for EEG-based emotion recognition is studied. We investigate the effect of different feature sets in classifying a person's arousal and valence while watching videos with emotional content. The classification performance is optimized by applying a sequential forward floating search algorithm for feature selection. The best classification rate (65.1% for arousal and 63.0% for valence) is obtained with a feature set containing power spectral features from the frequency band of 1-32 Hz. The proposed approach substantially improves the classification rate reported in the literature. In the future, further analysis of the video-induced EEG changes, including the topographical differences in the spectral features, is needed.
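As a rough illustration of the wrapper-style selection used here, the following is a plain sequential forward selection; the paper's floating variant additionally revisits and drops previously chosen features. The feature names and scoring function below are hypothetical:

```python
def forward_select(features, score, k):
    """Greedy sequential forward selection: repeatedly add the feature
    that most improves `score` on the current subset."""
    selected = []
    remaining = list(features)
    for _ in range(k):
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: reward the subset's summed 'usefulness', penalize its size.
usefulness = {"theta_pow": 3.0, "alpha_asym": 5.0,
              "beta_pow": 1.0, "gamma_pow": 0.5}
score = lambda subset: sum(usefulness[f] for f in subset) - 0.1 * len(subset)

print(forward_select(usefulness, score, 2))   # ['alpha_asym', 'theta_pow']
```

In the paper's setting, `score` would be the cross-validated classification rate of arousal/valence for a candidate EEG feature subset, which is what makes the search a wrapper method rather than a filter.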
Directory of Open Access Journals (Sweden)
Helen Lunt
Full Text Available BACKGROUND: In research clinic settings, overweight adults undertaking HIIT (high intensity interval training) improve their fitness as effectively as those undertaking conventional walking programs, but can do so within a shorter time spent exercising. We undertook a randomized controlled feasibility (pilot) study aimed at extending HIIT into a real-world setting by recruiting overweight/obese, inactive adults into a group-based activity program held in a community park. METHODS: Participants were allocated into one of three groups. The two interventions, aerobic interval training and maximal volitional interval training, were compared with an active control group undertaking walking-based exercise. Supervised group sessions (36 per intervention) were held outdoors. Cardiorespiratory fitness was measured using VO2max (maximal oxygen uptake; results expressed in ml/min/kg) before and after the 12-week interventions. RESULTS: On ITT (intention to treat) analyses, baseline (N = 49) and exit (N = 39) VO2max was 25.3±4.5 and 25.3±3.9, respectively. Participant allocation and baseline/exit VO2max by group was as follows: aerobic interval training N = 16, 24.2±4.8/25.6±4.8; maximal volitional interval training N = 16, 25.0±2.8/25.2±3.4; walking N = 17, 26.5±5.3/25.2±3.6. The post-intervention change in VO2max was +1.01 in the aerobic interval training, -0.06 in the maximal volitional interval training, and -1.03 in the walking subgroups. The aerobic interval training subgroup increased VO2max compared to walking (p = 0.03). The actual (observed, rather than prescribed) time spent exercising (minutes per week, ITT analysis) was 74 for aerobic interval training, 45 for maximal volitional interval training, and 116 for walking (p = 0.001). On descriptive analysis, the walking subgroup had the fewest adverse events. CONCLUSIONS: In contrast to earlier studies, the improvement in cardiorespiratory fitness in a
Abutarboush, Hattan
2012-08-01
This paper presents the design of a low-profile compact printed antenna for fixed frequency and reconfigurable frequency bands. The antenna consists of a main patch, four sub-patches, and a ground plane to generate five frequency bands, at 0.92, 1.73, 1.98, 2.4, and 2.9 GHz, for different wireless systems. For the fixed-frequency design, the five individual frequency bands can be adjusted and set independently over the wide ranges of 18.78%, 22.75%, 4.51%, 11%, and 8.21%, respectively, using just one parameter of the antenna. By putting a varactor (diode) at each of the sub-patch inputs, four of the frequency bands can be controlled independently over wide ranges and the antenna has a reconfigurable design. The tunability ranges for the four bands of 0.92, 1.73, 1.98, and 2.9 GHz are 23.5%, 10.30%, 13.5%, and 3%, respectively. The fixed and reconfigurable designs are studied using computer simulation. For verification of simulation results, the two designs are fabricated and the prototypes are measured. The results show a good agreement between simulated and measured results. © 1963-2012 IEEE.
Abutarboush, Hattan; Nilavalan, Rajagopal; Cheung, Sing Wai; Nasr, Karim Medhat A
2012-01-01
This paper presents the design of a low-profile compact printed antenna for fixed frequency and reconfigurable frequency bands. The antenna consists of a main patch, four sub-patches, and a ground plane to generate five frequency bands, at 0.92, 1.73, 1.98, 2.4, and 2.9 GHz, for different wireless systems. For the fixed-frequency design, the five individual frequency bands can be adjusted and set independently over the wide ranges of 18.78%, 22.75%, 4.51%, 11%, and 8.21%, respectively, using just one parameter of the antenna. By putting a varactor (diode) at each of the sub-patch inputs, four of the frequency bands can be controlled independently over wide ranges and the antenna has a reconfigurable design. The tunability ranges for the four bands of 0.92, 1.73, 1.98, and 2.9 GHz are 23.5%, 10.30%, 13.5%, and 3%, respectively. The fixed and reconfigurable designs are studied using computer simulation. For verification of simulation results, the two designs are fabricated and the prototypes are measured. The results show a good agreement between simulated and measured results. © 1963-2012 IEEE.
Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz
2018-01-01
High-throughput technologies generate a considerable amount of data, which often requires bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering, and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command-line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction, and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility, in terms of input file handling, provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as Open Source software (https://github.com/pmadanecki/htdp).
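The filter-by-itemized-criteria task the abstract describes can be illustrated generically. This is not HTDP's interface or file format, just a sketch of filtering character-delimited rows against an external set of conditions:

```python
import csv, io

# Hypothetical BED-like rows and an external criteria file listing the
# chromosomes to keep -- the kind of itemized condition set HTDP reads.
bed = "chr1\t100\t200\nchr2\t300\t400\nchr1\t500\t600\nchrX\t10\t20\n"
criteria = "chr1\nchrX\n"

# Load the criteria into a set, then keep only matching rows.
keep = {line.strip() for line in io.StringIO(criteria) if line.strip()}
rows = [r for r in csv.reader(io.StringIO(bed), delimiter="\t")
        if r[0] in keep]
print(len(rows))   # 3
```

HTDP's contribution is doing this kind of merge/filter interactively over many large files without the user writing code; the sketch only shows the underlying operation.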
Better firm performance through board independence in a two-tier setting
DEFF Research Database (Denmark)
Schøler, Finn; Holm, Claus
2013-01-01
independence, these were retrieved from different sections in the corresponding annual reports, i.e. from different notes and different parts of the management commentary. We used structured equations models (IBM SPSS, AMOS 19) to model the hypothesized relationship between board independence and performance...
International Nuclear Information System (INIS)
Korkola, James E; Waldman, Frederic M; Blaveri, Ekaterina; DeVries, Sandy; Moore, Dan H II; Hwang, E Shelley; Chen, Yunn-Yi; Estep, Anne LH; Chew, Karen L; Jensen, Ronald H
2007-01-01
Breast cancer is a heterogeneous disease, presenting with a wide range of histologic, clinical, and genetic features. Microarray technology has shown promise in predicting outcome in these patients. We profiled 162 breast tumors using expression microarrays to stratify tumors based on gene expression. A subset of 55 tumors with extensive follow-up was used to identify gene sets that predicted outcome. The predictive gene set was further tested in previously published data sets. We used different statistical methods to identify three gene sets associated with disease free survival. A fourth gene set, consisting of 21 genes in common to all three sets, also had the ability to predict patient outcome. To validate the predictive utility of this derived gene set, it was tested in two published data sets from other groups. This gene set resulted in significant separation of patients on the basis of survival in these data sets, correctly predicting outcome in 62–65% of patients. By comparing outcome prediction within subgroups based on ER status, grade, and nodal status, we found that our gene set was most effective in predicting outcome in ER positive and node negative tumors. This robust gene selection with extensive validation has identified a predictive gene set that may have clinical utility for outcome prediction in breast cancer patients
Indian Academy of Sciences (India)
Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, ..., k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of.
Indian Academy of Sciences (India)
It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, ..., k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...
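The exponential form of the maximizer quoted in these two records follows from a standard Lagrange-multiplier calculation (a textbook derivation, sketched here for context, not the paper's proof):

```latex
% Maximize H(f) = -\int f \log f \, d\mu subject to the constraints
% \int f \, d\mu = 1 and \int f h_i \, d\mu = \lambda_i, i = 1,\dots,k.
% Lagrangian with multipliers c_0 (normalization) and c_1,\dots,c_k:
L[f] = -\int f \log f \, d\mu
     + c_0\left(\int f \, d\mu - 1\right)
     + \sum_{i=1}^{k} c_i\left(\int f h_i \, d\mu - \lambda_i\right)
% Pointwise stationarity in f:
\frac{\delta L}{\delta f} = -\log f - 1 + c_0 + \sum_{i=1}^{k} c_i h_i = 0
% hence
f_0 = \exp\!\Big(c_0 - 1 + \sum_{i=1}^{k} c_i h_i\Big)
    \;\propto\; \exp\!\Big(\sum_{i=1}^{k} c_i h_i\Big).
```

The multipliers c_i are then fixed by requiring f_0 to satisfy the k moment constraints, which is exactly the "some choice of c_i" in the abstract.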
An Independent Filter for Gene Set Testing Based on Spectral Enrichment
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in
Directory of Open Access Journals (Sweden)
Pierre Lafaye de Micheaux
2011-10-01
Full Text Available For the statistical analysis of functional magnetic resonance imaging (fMRI) data sets, we propose a data-driven approach based on independent component analysis (ICA), implemented in a new version of the AnalyzeFMRI R package. For fMRI data sets, the spatial dimension being much greater than the temporal dimension, spatial ICA is the computationally tractable approach generally proposed. However, for some neuroscientific applications, temporal independence of the source signals can be assumed, and temporal ICA then becomes an attractive exploratory technique. In this work, we use a classical linear algebra result ensuring the tractability of temporal ICA. We report several experiments on synthetic data and real MRI data sets that demonstrate the potential interest of our R package.
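The "classical linear algebra result" alluded to is presumably the familiar duality between X Xᵀ and Xᵀ X: both share the same nonzero eigenvalues, so when voxels vastly outnumber time points, the temporal decomposition can work with the small Gram matrix instead of the huge spatial covariance. A numpy sketch (illustrative; not the AnalyzeFMRI code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_time = 5000, 40           # spatial dim >> temporal dim
X = rng.standard_normal((n_vox, n_time))   # stand-in fMRI data matrix

# Forming the n_vox x n_vox matrix X @ X.T would be wasteful; the small
# n_time x n_time Gram matrix carries the same nonzero spectrum.
G = X.T @ X                        # 40 x 40
evals_small = np.linalg.eigvalsh(G)

# Cross-check via singular values: sigma_i**2 equals the eigenvalues of G.
sigma = np.linalg.svd(X, compute_uv=False)
assert np.allclose(np.sort(sigma**2), np.sort(evals_small))
```

This is what makes the whitening/PCA step preceding temporal ICA tractable; the subsequent ICA rotation then operates entirely in the small temporal subspace.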
Independent attacks in imperfect settings: A case for a two-way quantum key distribution scheme
International Nuclear Information System (INIS)
Shaari, J.S.; Bahari, Iskandar
2010-01-01
We review the study on a two-way quantum key distribution protocol given imperfect settings through a simple analysis of a toy model and show that it can outperform a BB84 setup. We provide the sufficient condition for this as a ratio of optimal intensities for the protocols.
Farias, Déborah de Araújo; Willardson, Jeffrey M; Paz, Gabriel A; Bezerra, Ewertton de S; Miranda, Humberto
2017-07-01
Farias, DdA, Willardson, JM, Paz, GA, Bezerra, EdS, and Miranda, H. Maximal strength performance and muscle activation for the bench press and triceps extension exercises adopting dumbbell, barbell and machine modalities over multiple sets. J Strength Cond Res 31(7): 1879-1887, 2017-The purpose of this study was to investigate muscle activation, total repetitions, and training volume for 3 bench press (BP) exercise modes (Smith machine [SMBP], barbell [BBP], and dumbbell [DBP]) that were followed by a triceps extension (TE) exercise. Nineteen trained men performed 3 testing protocols in random order, which included: (P1) SMBP + TE; (P2) BBP + TE; and (P3) DBP + TE. Each protocol involved 4 sets with a 10-repetition maximum (RM) load, immediately followed by a TE exercise that was also performed for 4 sets with a 10RM load. A 2-minute rest interval was adopted between sets and exercises. Surface electromyographic activity was assessed for the pectoralis major (PM), anterior deltoid (AD), biceps brachii (BB), and triceps brachii (TB). The results indicated that significantly higher total repetitions were achieved for the DBP (31.2 ± 3.2) vs. the BBP (27.8 ± 4.8). For the TE, significantly greater volume was achieved when this exercise was performed after the BBP (1,204.4 ± 249.4 kg) and DBP (1,216.8 ± 287.5 kg) vs. the SMBP (1,097.5 ± 193 kg). The DBP elicited significantly greater PM activity vs. the BBP. The SMBP elicited significantly greater AD activity vs. the BBP and DBP. During the different BP modes, the SMBP and BBP elicited significantly greater TB activity vs. the DBP. However, the DBP elicited significantly greater BB activity vs. the SMBP and BBP, respectively. During the succeeding TE exercise, significantly greater activity of the TB was observed when this exercise was performed after the BBP vs. the SMBP and DBP. Therefore, it seems that the variation in BP modes does influence both repetition performance and muscle activation patterns during the
Winkler, A. J.; Brovkin, V.; Myneni, R.; Alexandrov, G.
2017-12-01
Plant growth in the northern high latitudes (NHL) benefits from increasing temperature (a radiative effect) and from CO2 fertilization as a consequence of rising atmospheric CO2 concentration. This enhanced gross primary production (GPP) is evident in the large-scale increase in summertime greening over the 36-year record of satellite observations. Over this period, various global ecosystem models also simulate a greening trend in terms of increasing leaf area index (LAI). We likewise found a persistent greening trend when analyzing historical simulations of Earth system models (ESMs) participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). However, these models span a large range in the strength of the LAI trend, expressed as sensitivity to the two key environmental factors, temperature and CO2 concentration. There is also a wide spread in the magnitude of the associated increase of terrestrial GPP among the ESMs, which contributes to pronounced uncertainties in projections of future climate change. Here we demonstrate that there is a linear relationship across the CMIP5 model ensemble between projected GPP changes and historical LAI sensitivity, which allows using the observed LAI sensitivity as an "emerging constraint" on GPP estimation at future CO2 concentration. This constrained estimate of future GPP is substantially higher than the traditional multi-model mean, suggesting that the majority of current ESMs may be significantly underestimating carbon fixation by vegetation in the NHL. We provide three independent lines of evidence, analyzing observed and simulated CO2 amplitude as well as atmospheric CO2 inversion products, to arrive at the same conclusion.
A set of ligation-independent in vitro translation vectors for eukaryotic protein production
Directory of Open Access Journals (Sweden)
Endo Yaeta
2008-03-01
Full Text Available Abstract Background The last decade has brought the renaissance of protein studies and accelerated the development of high-throughput methods in all aspects of proteomics. Presently, most protein synthesis systems exploit the capacity of living cells to translate proteins, but their application is limited by several factors. A more flexible alternative protein production method is cell-free in vitro protein translation. Currently available in vitro translation systems are suitable for high-throughput robotic protein production, fulfilling the requirements of proteomics studies. The wheat germ extract-based in vitro translation system is likely the most promising method, since numerous eukaryotic proteins can be cost-efficiently synthesized in their native folded form. Although currently available vectors for wheat embryo in vitro translation systems ensure high productivity, they do not meet the requirements of state-of-the-art proteomics: target genes have to be inserted using restriction endonucleases, and the plasmids do not encode cleavable affinity purification tags. Results We designed four ligation independent cloning (LIC) vectors for wheat germ extract-based in vitro protein translation. In these constructs, RNA transcription is driven by T7 or SP6 phage polymerase, and two TEV protease-cleavable affinity tags can be added to aid protein purification. To evaluate our improved vectors, a plant mitogen-activated protein kinase was cloned into all four constructs. Purification of this eukaryotic protein kinase demonstrated that all constructs functioned as intended: insertion of the PCR fragment by LIC worked efficiently, affinity purification of translated proteins by GST-Sepharose or MagneHis particles resulted in high-purity kinase, and the affinity tags could be efficiently removed under different reaction conditions. Furthermore, high in vitro kinase activity testified to proper folding of the purified protein. Conclusion Four newly
Johansen, Inger; Lindbak, Morten; Stanghelle, Johan K; Brekke, Mette
2012-11-14
The optimal setting and content of primary health care rehabilitation of older people is not known. Our aim was to study independence, institutionalization, death and treatment costs 18 months after primary care rehabilitation of older people in two different settings. Eighteen months follow-up of an open, prospective study comparing the outcome of multi-disciplinary rehabilitation of older people, in a structured and intensive Primary care dedicated inpatient rehabilitation (PCDIR, n=202) versus a less structured and less intensive Primary care nursing home rehabilitation (PCNHR, n=100). 302 patients, disabled from stroke, hip-fracture, osteoarthritis and other chronic diseases, aged ≥65 years, assessed to have a rehabilitation potential and being referred from general hospital or own residence. Primary: Independence, assessed by Sunnaas ADL Index (SI). Secondary: Hospital and short-term nursing home length of stay (LOS); institutionalization, measured by institutional residence rate; death; and costs of rehabilitation and care. Statistical tests: T-tests, Correlation tests, Pearson's χ2, ANCOVA, Regression and Kaplan-Meier analyses. Overall SI scores were 26.1 (SD 7.2) compared to 27.0 (SD 5.7) at the end of rehabilitation, a statistically, but not clinically significant reduction (p=0.003, 95%CI(0.3-1.5)). The PCDIR patients scored 2.2 points higher in SI than the PCNHR patients, adjusted for age, gender, baseline MMSE and SI scores (p=0.003, 95%CI(0.8-3.7)). Out of 49 patients staying >28 days in short-term nursing homes, PCNHR patients stayed significantly longer than PCDIR patients (mean difference 104.9 days, 95%CI(0.28-209.6), p=0.05). The institutionalization increased in PCNHR (from 12% to 28%, p=0.001), but not in PCDIR (from 16.9% to 19.3%, p=0.45). The overall one-year mortality rate was 9.6%. Average costs were substantially higher for PCNHR versus PCDIR. The difference per patient was 3528€ for rehabilitation (prehabilitation and care were 18702€ (=1
Directory of Open Access Journals (Sweden)
Johansen Inger
2012-11-01
Full Text Available Abstract Background The optimal setting and content of primary health care rehabilitation of older people is not known. Our aim was to study independence, institutionalization, death and treatment costs 18 months after primary care rehabilitation of older people in two different settings. Methods Eighteen months follow-up of an open, prospective study comparing the outcome of multi-disciplinary rehabilitation of older people, in a structured and intensive Primary care dedicated inpatient rehabilitation (PCDIR, n=202) versus a less structured and less intensive Primary care nursing home rehabilitation (PCNHR, n=100). Participants: 302 patients, disabled from stroke, hip-fracture, osteoarthritis and other chronic diseases, aged ≥65 years, assessed to have a rehabilitation potential and being referred from general hospital or own residence. Outcome measures: Primary: Independence, assessed by Sunnaas ADL Index (SI). Secondary: Hospital and short-term nursing home length of stay (LOS); institutionalization, measured by institutional residence rate; death; and costs of rehabilitation and care. Statistical tests: T-tests, Correlation tests, Pearson's χ2, ANCOVA, Regression and Kaplan-Meier analyses. Results Overall SI scores were 26.1 (SD 7.2) compared to 27.0 (SD 5.7) at the end of rehabilitation, a statistically, but not clinically significant reduction (p=0.003, 95%CI(0.3-1.5)). The PCDIR patients scored 2.2 points higher in SI than the PCNHR patients, adjusted for age, gender, baseline MMSE and SI scores (p=0.003, 95%CI(0.8-3.7)). Out of 49 patients staying >28 days in short-term nursing homes, PCNHR patients stayed significantly longer than PCDIR patients (mean difference 104.9 days, 95%CI(0.28-209.6), p=0.05). The institutionalization increased in PCNHR (from 12% to 28%, p=0.001), but not in PCDIR (from 16.9% to 19.3%, p=0.45). The overall one-year mortality rate was 9.6%. Average costs were substantially higher for PCNHR versus PCDIR. The difference per patient
Sreenath, Satyan B; Taylor, Robert J; Miller, Justin D; Ambrose, Emily C; Rawal, Rounak B; Ebert, Charles S; Senior, Brent A; Zanation, Adam M
2015-09-01
Surprisingly, little literature exists evaluating the optimal duration of antibiotic treatment in "maximal medical therapy" for chronic rhinosinusitis (CRS). As such, we investigated whether 3 weeks vs 6 weeks of antibiotic therapy resulted in significant differences in clinical response. A prospective, randomized cohort study was performed with patients assigned to 3-week or 6-week cohorts. Our primary outcome was failure of "maximal medical therapy" and surgical recommendation. Secondary outcomes included changes in pretherapy and posttherapy scores for the Rhinosinusitis Disability Index (RSDI), Chronic Sinusitis Survey (CSS), and computed tomography (CT)-based Lund-Mackay (LM) evaluation. Analyses were substratified based on presence of nasal polyps. Forty patients were randomized to the 3-week or 6-week treatment cohorts, with near-complete clinical follow-up achieved. No significant difference was found between the proportion of patients who failed medical therapy and were deemed surgical candidates between the 2 cohorts (71% vs 68%, p = 1.000). No significant difference was found in the change of RSDI or CSS scores in the 3 vs 6 weeks of treatment groups (mean ± standard error of the mean [SEM]; RSDI: 9.62 ± 4.14 vs 1.53 ± 4.01, p = 0.868; CSS: 5.75 ± 4.36 vs 9.65 ± 5.34, p = 0.573). Last, no significant difference was found in the change of LM scores (3.35 ± 1.11 vs 1.53 ± 0.81, p = 0.829). Based on this data, there is little difference in clinical outcomes between 3 weeks vs 6 weeks of antibiotic treatment as part of "maximal medical therapy" for CRS. Increased duration of antibiotic treatment theoretically may increase risk from side effects and creates higher healthcare costs. © 2015 ARS-AAOA, LLC.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
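The cumulative-distribution comparison described above can be sketched in a few lines. This is an illustrative implementation of the classical statistics only (the Kolmogorov-Smirnov statistic and Kuiper's variant), not the paper's new density-based tests; the function names are mine.

```python
import math
import random

def ks_statistic(draws, cdf):
    """Kolmogorov-Smirnov statistic: the maximum gap between the
    empirical CDF of the draws and the specified CDF."""
    xs = sorted(draws)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # The ECDF jumps from i/n to (i+1)/n at x; check both sides.
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d

def kuiper_statistic(draws, cdf):
    """Kuiper's variant: the sum of the largest deviations above and
    below the specified CDF, which treats the tails more evenly."""
    xs = sorted(draws)
    n = len(xs)
    d_plus = max((i + 1) / n - cdf(x) for i, x in enumerate(xs))
    d_minus = max(cdf(x) - i / n for i, x in enumerate(xs))
    return d_plus + d_minus

# Example: draws that really do come from the hypothesized uniform law
# give a small statistic; a wrong hypothesis inflates it.
random.seed(0)
uniform_cdf = lambda x: min(1.0, max(0.0, x))
sample = [random.random() for _ in range(1000)]
print(ks_statistic(sample, uniform_cdf), kuiper_statistic(sample, uniform_cdf))
```

As the abstract notes, both statistics compare cumulative functions, so a localized dip in a density can be smoothed over; the paper's tests target exactly that weakness.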
Maximal Conflict Set Enumeration Algorithm Based on Locality of Petri Nets
Institute of Scientific and Technical Information of China (English)
潘理; 郑红; 刘显明; 杨勃
2016-01-01
Conflict is an essential concept in Petri net theory. Existing research focuses on the modelling and resolution strategies of conflict problems, but rarely on the computational complexity of the problems themselves. In this paper, we propose the conflict set problem for Petri nets and prove that it is NP (Non-deterministic Polynomial)-complete. Furthermore, we present a dynamic algorithm for maximal conflict set enumeration. Starting from the maximal conflict sets at the current marking, the algorithm exploits the locality of transition firing in Petri nets to compute only those maximal conflict sets at the next marking that are affected by the firing, thereby avoiding re-enumeration of all maximal conflict sets. The algorithm runs in O(m²n) time, where m is the number of maximal conflict sets at the current marking and n is the number of transitions. Finally, we show that maximal conflict set enumeration can be solved in O(n²) time for free-choice nets and asymmetric choice nets. These complexity results provide a theoretical reference for solving conflict problems of Petri nets.
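As an illustration of why the problem is hard, here is a naive, exponential-time enumeration of maximal conflict sets at a single marking. This is a brute-force sketch under my own simplified definitions, not the paper's locality-based dynamic algorithm: enabled transitions that pairwise compete for shared input tokens form a conflict graph, and maximal conflict sets correspond to its maximal cliques (consistent with the NP-completeness claim).

```python
from itertools import combinations

def enabled(marking, pre):
    """Transitions whose input places all carry enough tokens.
    pre[t] maps each input place of t to the tokens t consumes."""
    return [t for t, need in pre.items()
            if all(marking.get(p, 0) >= w for p, w in need.items())]

def in_conflict(t1, t2, marking, pre):
    """Two enabled transitions conflict if some shared input place
    cannot supply both of their demands at once."""
    shared = set(pre[t1]) & set(pre[t2])
    return any(marking.get(p, 0) < pre[t1][p] + pre[t2][p] for p in shared)

def maximal_conflict_sets(marking, pre):
    """Brute force: every maximal set of pairwise-conflicting enabled
    transitions (exponential in the number of enabled transitions)."""
    ts = enabled(marking, pre)
    cliques = []
    for r in range(len(ts), 0, -1):  # largest sets first
        for combo in combinations(ts, r):
            if all(in_conflict(a, b, marking, pre)
                   for a, b in combinations(combo, 2)):
                # keep only sets not contained in an already-found one
                if not any(set(combo) <= c for c in cliques):
                    cliques.append(set(combo))
    return cliques

# t1 and t2 compete for the single token in p1; t3 fires independently.
pre = {"t1": {"p1": 1}, "t2": {"p1": 1}, "t3": {"p2": 1}}
marking = {"p1": 1, "p2": 1}
print(maximal_conflict_sets(marking, pre))
```

The paper's contribution is precisely to avoid this full re-enumeration at every marking by updating only the conflict sets a firing can touch.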
International Nuclear Information System (INIS)
Snijder, E.J.; Horzinek, M.C.; Spaan, W.J.
1990-01-01
By using poly(A)-selected RNA from Berne virus (BEV)-infected embryonic mule skin cells as a template, cDNA was prepared and cloned in plasmid pUC9. Recombinants covering a contiguous sequence of about 10 kilobases were identified. Northern (RNA) blot hybridizations with various restriction fragments from these clones showed that the five BEV mRNAs formed a 3'-coterminal nested set. Sequence analysis revealed the presence of four complete open reading frames of 4743, 699, 426, and 480 nucleotides, with initiation codons coinciding with the 5' ends of BEV RNAs 2 through 5, respectively. By using primer extension analysis and oligonucleotide hybridizations, RNA 5 was found to be contiguous on the consensus sequence. The transcription of BEV mRNAs was studied by means of UV mapping. BEV RNAs 1, 2, and 3 were shown to be transcribed independently, which is also likely--although not rigorously proven--for RNAs 4 and 5. Upstream of the AUG codon of each open reading frame a conserved sequence pattern was observed which is postulated to function as a core promoter sequence in subgenomic RNA transcription. In the area surrounding the core promoter region of the two most abundant subgenomic BEV RNAs, a number of homologous sequence motifs were identified
Canavati, Sara E; Quintero, Cesia E; Haller, Britt; Lek, Dysoley; Yok, Sovann; Richards, Jack S; Whittaker, Maxine Anne
2017-09-11
In a drug-resistant, malaria elimination setting like Western Cambodia, field research is essential for the development of novel anti-malarial regimens and the public health solutions necessary to monitor the spread of resistance and eliminate infection. Such field studies often face a variety of similar implementation challenges, but these are rarely captured in a systematic way or used to optimize future study designs that might overcome similar challenges. Field-based research staff often have extensive experience and can provide valuable insight regarding these issues, but their perspectives and experiences are rarely documented and seldom integrated into future research protocols. This mixed-methods analysis sought to gain an understanding of the daily challenges encountered by research field staff in the artemisinin-resistant, malaria elimination setting of Western Cambodia. In doing so, this study seeks to understand how the experiences and opinions of field staff can be captured, and used to inform future study designs. Twenty-two reports from six field-based malaria studies conducted in Western Cambodia were reviewed using content analysis to identify challenges to conducting the research. Informal Interviews, Focus Group Discussions and In-depth Interviews were also conducted among field research staff. Thematic analysis of the data was undertaken using Nvivo 9 ® software. Triangulation and critical case analysis was also used. There was a lack of formalized avenues through which field workers could report challenges experienced when conducting the malaria studies. Field research staff faced significant logistical barriers to participant recruitment and data collection, including a lack of available transportation to cover long distances, and the fact that mobile and migrant populations (MMPs) are usually excluded from studies because of challenges in follow-up. Cultural barriers to communication also hindered participant recruitment and created
Profit maximization mitigates competition
DEFF Research Database (Denmark)
Dierker, Egbert; Grodal, Birgit
1996-01-01
We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account and partial equilibrium analysis suffices.
Al Ayubi, Soleh U; Pelletier, Alexandra; Sunthara, Gajen; Gujral, Nitin; Mittal, Vandna; Bourgeois, Fabienne C
2016-05-11
built into the app. Phase 3 involved deployment of TaskList on a clinical floor at BCH. Lastly, Phase 4 gathered the lessons learned from the pilot to refine the guideline. Fourteen practical recommendations were identified to create the BCH Mobile Application Development Guideline to safeguard custom applications in hospital BYOD settings. The recommendations were grouped into four categories: (1) authentication and authorization, (2) data management, (3) safeguarding app environment, and (4) remote enforcement. Following the guideline, the TaskList app was developed and then was piloted with an inpatient ward team. The Mobile Application Development guideline was created and used in the development of TaskList. The guideline is intended for use by developers when addressing integration with hospital information systems, deploying apps in BYOD health care settings, and meeting compliance standards, such as Health Insurance Portability and Accountability Act (HIPAA) regulations.
Taeroe, Anders; Mustapha, Walid Fayez; Stupak, Inge; Raulund-Rasmussen, Karsten
2017-07-15
Forests' potential to mitigate carbon emissions to the atmosphere is heavily debated and a key question is if forests left unmanaged to store carbon in biomass and soil provide larger carbon emission reductions than forests kept under forest management for production of wood that can substitute fossil fuels and fossil fuel intensive materials. We defined a modelling framework for calculation of the carbon pools and fluxes along the forest energy and wood product supply chains over 200 years for three forest management alternatives (FMA): 1) a traditionally managed European beech forest, as a business-as-usual case, 2) an energy poplar plantation, and 3) a set-aside forest left unmanaged for long-term storage of carbon. We calculated the cumulative net carbon emissions (CCE) and carbon parity times (CPT) of the managed forests relative to the unmanaged forest. Energy poplar generally had the lowest CCE when using coal as the reference fossil fuel. With natural gas as the reference fossil fuel, the CCE of the business-as-usual and the energy poplar was nearly equal, with the unmanaged forest having the highest CCE after 40 years. CPTs ranged from 0 to 156 years, depending on the applied model assumptions. CCE and CPT were especially sensitive to the reference fossil fuel, material alternatives to wood, forest growth rates for the three FMAs, and energy conversion efficiencies. Assumptions about the long-term steady-state levels of carbon stored in the unmanaged forest had a limited effect on CCE after 200 years. Analyses also showed that CPT was not a robust measure for ranking of carbon mitigation benefits. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Denny Meyer
2006-12-01
Full Text Available The objective of this paper is to use data from the highest level in men's tennis to assess whether there is any evidence to reject the hypothesis that the two players in a match have a constant probability of winning each set in the match. The data consist of all 4883 matches of grand slam men's singles over the 10-year period from 1995 to 2004. Each match is categorised by its sequence of wins (W) or losses (L) (in set 1, set 2, set 3, ...) to the eventual winner. Thus, there are ten categories of matches, from WWW to LLWWW. The methodology involves fitting several probabilistic models to the frequencies of these ten categories. One four-set category is observed to occur significantly more often than the other two. Correspondingly, a couple of the five-set categories occur more frequently than the others. This pattern is consistent when the data is split into two five-year subsets. The data provide significant statistical evidence that the probability of winning a set within a match varies from set to set. The data support the conclusion that, at the highest level of men's singles tennis, the better player (not necessarily the winner) lifts his play in certain situations at least some of the time.
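Under the constant-probability null hypothesis described above, the expected frequency of each of the ten categories can be computed directly. A short sketch (my own construction, assuming independent sets and a fixed per-set win probability p for one of the two players):

```python
import math
from itertools import product

def sequence_prob(seq, p):
    """Probability that a given player produces exactly this W/L sequence,
    winning each set independently with probability p."""
    return math.prod(p if s == "W" else 1 - p for s in seq)

def category_probs(p):
    """Distribution over the ten 'sequence to the eventual winner'
    categories (WWW ... LLWWW) in a best-of-five match."""
    cats = {}
    for n in (3, 4, 5):
        for seq in product("WL", repeat=n):
            # A finished match: the third win arrives in the final set.
            if seq[-1] == "W" and seq.count("W") == 3:
                # The eventual winner is either player A (per-set prob p)
                # or player B (per-set prob 1 - p).
                cats["".join(seq)] = (sequence_prob(seq, p)
                                      + sequence_prob(seq, 1 - p))
    return cats

probs = category_probs(0.6)
print(len(probs), sum(probs.values()))
```

Comparing these model frequencies against the observed counts for the 4883 matches (e.g. with a chi-squared statistic) is the kind of test the paper performs; a significantly poor fit rejects the constant-p hypothesis.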
Munro, Emma L; Hickling, Donna F; Williams, Damian M; Bell, Jack J
2018-05-24
Skin tears cause pain, increased length of stay, increased costs, and reduced quality of life. Minimal research reports the association between skin tears and malnutrition using robust measures of nutritional status. This study aimed to articulate the association between malnutrition and skin tears in hospital inpatients using a yearly point prevalence of inpatients included in the Queensland Patient Safety Bedside Audit, malnutrition audits and skin tear audits conducted at a metropolitan tertiary hospital between 2010 and 2015. Patients were excluded if admitted to mental health wards or were <18 years. A total of 2197 inpatients were included, with a median age of 71 years. The overall prevalence of skin tears was 8.1%. Malnutrition prevalence was 33.5%. Univariate analysis demonstrated associations between age (P < .001), body mass index (BMI) (P < .001) and malnutrition (P < .001) but not gender (P = .319). Binomial logistic regression modelling demonstrated that malnutrition diagnosed using the Subjective Global Assessment was independently associated with skin tear incidence (odds ratio, OR: 1.63; 95% confidence interval, CI: 1.13-2.36) and multiple skin tears (OR 2.48 [95% CI 1.37-4.50]). BMI was not independently associated with skin tears or multiple skin tears. This study demonstrated independent associations between malnutrition and skin tear prevalence and multiple skin tears. It also demonstrated the limitations of BMI as a nutritional assessment measure. © 2018 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
Maximally incompatible quantum observables
Energy Technology Data Exchange (ETDEWEB)
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)
2014-05-01
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Maximally incompatible quantum observables
International Nuclear Information System (INIS)
Heinosaari, Teiko; Schultz, Jussi; Toigo, Alessandro; Ziman, Mario
2014-01-01
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Schreij, D.; Theeuwes, J.; Olivers, C.N.L.
2010-01-01
Is attentional capture contingent on top-down control settings or involuntarily driven by salient stimuli? Supporting the stimulus-driven attentional capture view, Schreij, Owens, and Theeuwes (2008) found that an onset distractor caused a response delay, in spite of participants' having adopted an
Andrew M. Parker; Wandi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...
Maximal combustion temperature estimation
International Nuclear Information System (INIS)
Golodova, E; Shchepakina, E
2006-01-01
This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models
Ropiquet, Anne; Li, Blaise; Hassanin, Alexandre
2009-09-01
Supermatrix and supertree are two methods for constructing a phylogenetic tree from multiple data sets. However, these methods are not a panacea, as conflicting signals between data sets can lead to misinterpretation of the evolutionary history of taxa. In particular, the supermatrix approach is expected to be misleading if the species-tree signal is not dominant after combination of the data sets. Moreover, most current supertree methods suffer from two limitations: (i) they ignore or misinterpret secondary (non-dominant) phylogenetic signals of the different data sets; and (ii) the logical basis of node robustness measures is unclear. To overcome these limitations, we propose a new approach, called SuperTRI, which is based on branch support analyses of the independent data sets, and where the reliability of the nodes is assessed using three measures: the supertree Bootstrap percentage and two other values calculated from the separate analyses: the mean branch support (mean Bootstrap percentage or mean posterior probability) and the reproducibility index. The SuperTRI approach is tested on a data matrix including seven genes for 82 taxa of the family Bovidae (Mammalia, Ruminantia), and the results are compared to those found with the supermatrix approach. The phylogenetic analyses of the supermatrix and independent data sets were done using four methods of tree reconstruction: Bayesian inference, maximum likelihood, and unweighted and weighted maximum parsimony. The results indicate, firstly, that the SuperTRI approach is less sensitive to the choice among the four phylogenetic methods, secondly, that it interprets the relationships among taxa more accurately, and thirdly, that interesting conclusions on introgression and radiation can be drawn from the comparisons between the SuperTRI and supermatrix analyses.
Utility maximization and mode of payment
Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.
2000-01-01
The implications of stochastic utility maximization in a model of choice of payment are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points.
Maximally multipartite entangled states
Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio
2008-06-01
We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.
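The minimization problem mentioned above can be stated compactly. This is a sketch from memory of the commonly cited formulation, not a quotation from the paper: the "potential of multipartite entanglement" averages the subsystem purity over all balanced bipartitions, and maximally multipartite entangled states minimize it.

```latex
\pi_{\mathrm{ME}}(|\psi\rangle)
  \;=\; \binom{n}{n_A}^{-1} \sum_{|A| = n_A}
        \operatorname{Tr}\!\left(\rho_A^{2}\right),
\qquad
\rho_A = \operatorname{Tr}_{\bar{A}}\, |\psi\rangle\langle\psi| ,
\quad n_A = \lfloor n/2 \rfloor
```

A state is maximally multipartite entangled when this average purity reaches its minimum, i.e. when every balanced reduction is as mixed as the Hilbert-space dimensions allow.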
Directory of Open Access Journals (Sweden)
Andrew M. Parker
2007-12-01
Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.
International Nuclear Information System (INIS)
Velichko, T.I.; Mikhailenko, S.N.; Tashkun, S.A.
2012-01-01
A set of mass-independent U_mj and Δ_mj parameters globally describing vibration-rotation energy levels of the CO molecule in the X¹Σ⁺ ground electronic state was fitted to more than 19,000 transitions of the ¹²C¹⁶O, ¹³C¹⁶O, ¹⁴C¹⁶O, ¹²C¹⁷O, ¹³C¹⁷O, ¹²C¹⁸O, and ¹³C¹⁸O isotopologues collected from the literature. The maximal values of the vibrational V and rotational J quantum numbers included in the fit were 41 and 128, respectively. The weighted standard deviation of the fit is 0.66. The fitted parameters were used for the calculation of Dunham coefficients Y_mj for nine isotopologues: ¹²C¹⁶O, ¹³C¹⁶O, ¹⁴C¹⁶O, ¹²C¹⁷O, ¹³C¹⁷O, ¹⁴C¹⁷O, ¹²C¹⁸O, ¹³C¹⁸O, and ¹⁴C¹⁸O. Transition frequencies calculated from the fitted parameters were compared with previously reported values. A critical analysis of the CO HITRAN and HITEMP data is also presented.
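For context, mass-independent parameters of this kind typically enter through a Watson-type Dunham expansion. The following is a sketch of the usual convention (my reconstruction, not necessarily the authors' exact notation), where μ is the reduced mass, m_e the electron mass, and M_C, M_O the atomic masses of the carbon and oxygen isotopes:

```latex
E(v, J) \;=\; \sum_{m,j} Y_{mj}\,\bigl(v + \tfrac{1}{2}\bigr)^{m}\,[J(J+1)]^{j},
\qquad
Y_{mj} \;\simeq\; U_{mj}\,\mu^{-(m/2 + j)}
\left(1 + m_e\,\frac{\Delta^{\mathrm{C}}_{mj}}{M_{\mathrm{C}}}
        + m_e\,\frac{\Delta^{\mathrm{O}}_{mj}}{M_{\mathrm{O}}}\right)
```

One fitted set of U_mj and Δ_mj thus generates the isotopologue-specific Dunham coefficients Y_mj for every carbon/oxygen mass combination, which is what allows a single global fit to cover nine isotopologues.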
International Nuclear Information System (INIS)
Gronau, M.
1984-01-01
Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references
IMNN: Information Maximizing Neural Networks
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.
Polarity related influence maximization in signed social networks.
Directory of Open Access Journals (Sweden)
Dong Li
Influence maximization in social networks has been widely studied, motivated by applications such as the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g., friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g., foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a polarity-related Independent Cascade diffusion model (named IC-P). We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
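The greedy approach the abstract relies on (a monotone, submodular spread function admits a 1-1/e greedy approximation) can be sketched for the standard, unsigned IC model. This is an illustrative sketch, not the authors' IC-P implementation; the graph, propagation probability `p`, and function names are assumptions:

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=None):
    """One Monte Carlo run of the standard Independent Cascade (IC) model.
    graph: dict mapping node -> list of out-neighbours.
    Returns the number of nodes activated starting from `seeds`."""
    rng = rng or random
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in graph.get(u, []):
                # each newly active node gets one chance to activate each neighbour
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return len(active)

def greedy_influence_max(graph, k, p=0.1, runs=100, seed=0):
    """Greedy seed selection: repeatedly add the node with the largest
    estimated marginal spread. For a monotone submodular spread function
    this carries the (1 - 1/e) approximation guarantee cited above."""
    rng = random.Random(seed)
    seeds = []
    for _ in range(k):
        best_node, best_spread = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            spread = sum(simulate_ic(graph, seeds + [v], p, rng)
                         for _ in range(runs)) / runs
            if spread > best_spread:
                best_node, best_spread = v, spread
        seeds.append(best_node)
    return seeds
```

In practice, lazy evaluation (CELF-style) avoids re-estimating the spread of every candidate in every round, which this plain sketch does.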
Directory of Open Access Journals (Sweden)
Koblbauer Ian FH
2011-10-01
Background: Patients undergoing total knee arthroplasty (TKA) often experience strength deficits both pre- and post-operatively. As these deficits may have a direct impact on functional recovery, strength assessment should be performed in this patient population. For these assessments, reliable measurements should be used. This study aimed to determine the inter- and intrarater reliability of hand-held dynamometry (HHD) in measuring isometric knee strength in patients awaiting TKA. Methods: To determine interrater reliability, 32 patients (81.3% female) were assessed by two examiners. Patients were assessed consecutively by both examiners on the same individual test dates. To determine intrarater reliability, a subgroup (n = 13) was again assessed by the examiners within four weeks of the initial testing procedure. Maximal isometric knee flexor and extensor strength were tested using a modified Citec hand-held dynamometer. Both the affected and unaffected knee were tested. Reliability was assessed using the intraclass correlation coefficient (ICC). In addition, the standard error of measurement (SEM) and the smallest detectable difference (SDD) were used to determine reliability. Results: In both the affected and unaffected knee, the inter- and intrarater reliability were good for the knee flexors (ICC range 0.76-0.94) and excellent for the knee extensors (ICC range 0.92-0.97). However, measurement error was high, with SDD ranges between 21.7% and 36.2% for interrater reliability and between 19.0% and 57.5% for intrarater reliability. Overall, measurement error was higher for the knee flexors than for the knee extensors. Conclusions: A modified HHD appears to be a reliable strength measure, producing good to excellent ICC values for both inter- and intrarater reliability in a group of TKA patients. High SEM and SDD values, however, indicate high measurement error for individual measures. This study demonstrates that a modified HHD is appropriate to
DEFF Research Database (Denmark)
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...
Tri-maximal vs. bi-maximal neutrino mixing
International Nuclear Information System (INIS)
Scott, W.G.
2000-01-01
It is argued that data from atmospheric and solar neutrino experiments point strongly to tri-maximal or bi-maximal lepton mixing. While ('optimised') bi-maximal mixing gives an excellent a posteriori fit to the data, tri-maximal mixing is an a priori hypothesis which is not excluded, taking account of terrestrial matter effects.
Froeschke, John T.; Stunz, Gregory W.; Sterba-Boatwright, Blair; Wildhaber, Mark L.
2010-01-01
Using a long-term fisheries-independent data set, we tested the 'shark nursery area concept' proposed by Heupel et al. (2007) with the suggested working assumptions that a shark nursery habitat would: (1) have an abundance of immature sharks greater than the mean abundance across all habitats where they occur; (2) be used by sharks repeatedly through time (years); and (3) see immature sharks remaining within the habitat for extended periods of time. We tested this concept using young-of-the-year (age 0) and juvenile (age 1+ yr) bull sharks Carcharhinus leucas from gill-net surveys conducted in Texas bays from 1976 to 2006 to estimate the potential nursery function of 9 coastal bays. Of the 9 bay systems considered as potential nursery habitat, only Matagorda Bay satisfied all 3 criteria for young-of-the-year bull sharks. Both Matagorda and San Antonio Bays met the criteria for juvenile bull sharks. Through these analyses we examined the utility of this approach for characterizing nursery areas and we also describe some practical considerations, such as the influence of the temporal or spatial scales considered when applying the nursery role concept to shark populations.
DEFF Research Database (Denmark)
Vind, Karl
1991-01-01
A simple mathematical result characterizing a subset of a product set is proved and used to obtain additive representations of preferences. The additivity consequences of independence assumptions are obtained for preferences which are not total or transitive. This means that most of the economic theory based on additive preferences, such as expected utility and discounted utility, has been generalized to preferences which are not total or transitive. Other economic applications of the theorem are given.
HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL
HR Division
2000-01-01
Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000. Reimbursement maximal: The revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321...
Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris
2004-01-01
The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide better than one tenth of a microarcsecond imaging in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort that includes a study of a baseline design as well as two alternative approaches.
Social group utility maximization
Gong, Xiaowen; Yang, Lei; Zhang, Junshan
2014-01-01
This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b
The Hyperuniverse Project and maximality
Friedman, Sy-David; Honzik, Radek; Ternullo, Claudio
2018-01-01
This collection documents the work of the Hyperuniverse Project which is a new approach to set-theoretic truth based on justifiable principles and which leads to the resolution of many questions independent from ZFC. The contributions give an overview of the program, illustrate its mathematical content and implications, and also discuss its philosophical assumptions. It will thus be of wide appeal among mathematicians and philosophers with an interest in the foundations of set theory. The Hyperuniverse Project was supported by the John Templeton Foundation from January 2013 until September 2015.
Knop, R. A.; Aldering, G.; Amanullah, R.; Astier, P.; Blanc, G.; Burns, M. S.; Conley, A.; Deustua, S. E.; Doi, M.; Ellis, R.; Fabbro, S.; Folatelli, G.; Fruchter, A. S.; Garavini, G.; Garmond, S.; Garton, K.; Gibbons, R.; Goldhaber, G.; Goobar, A.; Groom, D. E.; Hardin, D.; Hook, I.; Howell, D. A.; Kim, A. G.; Lee, B. C.; Lidman, C.; Mendez, J.; Nobili, S.; Nugent, P. E.; Pain, R.; Panagia, N.; Pennypacker, C. R.; Perlmutter, S.; Quimby, R.; Raux, J.; Regnault, N.; Ruiz-Lapuente, P.; Sainton, G.; Schaefer, B.; Schahmaneche, K.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Sullivan, M.; Walton, N. A.; Wang, L.; Wood-Vasey, W. M.; Yasuda, N.
2003-11-01
We report measurements of ΩM, ΩΛ, and w from 11 supernovae (SNe) at z=0.36-0.86 with high-quality light curves measured using WFPC2 on the Hubble Space Telescope (HST). This is an independent set of high-redshift SNe that confirms previous SN evidence for an accelerating universe. The high-quality light curves available from photometry on WFPC2 make it possible for these 11 SNe alone to provide measurements of the cosmological parameters comparable in statistical weight to the previous results. Combined with earlier Supernova Cosmology Project data, the new SNe yield a measurement of the mass density ΩM=0.25+0.07-0.06(statistical)+/-0.04 (identified systematics), or equivalently, a cosmological constant of ΩΛ=0.75+0.06-0.07(statistical)+/-0.04 (identified systematics), under the assumptions of a flat universe and that the dark energy equation-of-state parameter has a constant value w=-1. When the SN results are combined with independent flat-universe measurements of ΩM from cosmic microwave background and galaxy redshift distortion data, they provide a measurement of w=-1.05+0.15-0.20(statistical)+/-0.09 (identified systematic), if w is assumed to be constant in time. In addition to high-precision light-curve measurements, the new data offer greatly improved color measurements of the high-redshift SNe and hence improved host galaxy extinction estimates. These extinction measurements show no anomalous negative E(B-V) at high redshift. The precision of the measurements is such that it is possible to perform a host galaxy extinction correction directly for individual SNe without any assumptions or priors on the parent E(B-V) distribution. Our cosmological fits using full extinction corrections confirm that dark energy is required with P(ΩΛ>0)>0.99, a result consistent with previous and current SN analyses that rely on the identification of a low-extinction subset or prior assumptions concerning the intrinsic extinction distribution. Based in part on
On maximal surfaces in asymptotically flat space-times
International Nuclear Information System (INIS)
Bartnik, R.; Chrusciel, P.T.; O Murchadha, N.
1990-01-01
Existence of maximal and 'almost maximal' hypersurfaces in asymptotically flat space-times is established under boundary conditions weaker than those considered previously. We show in particular that every vacuum evolution of asymptotically flat data for the Einstein equations can be foliated by slices maximal outside a spatially compact set, and that every (strictly) stationary asymptotically flat space-time can be foliated by maximal hypersurfaces. Amongst other uniqueness results, we show that maximal hypersurfaces can be used to 'partially fix' an asymptotic Poincaré group. (orig.)
Paynter, D.; Weston, S. J.; Cosgrove, V. P.; Thwaites, D. I.
2018-01-01
Flattening filter free (FFF) beams have reached widespread use in clinical treatment deliveries. The usual methods of FFF beam characterisation for quality assurance (QA) require the use of associated conventional flattened (cFF) beams. Methods for QA of FFF beams without the need for associated cFF beams are presented and evaluated against current methods for both FFF and cFF beams. Inflection point normalisation is evaluated against conventional methods for the determination of field size and penumbra for field sizes from 3 cm × 3 cm to 40 cm × 40 cm at depths from dmax to 20 cm in water for matched and unmatched FFF beams and for cFF beams. A method for measuring symmetry in the cross-plane direction is suggested and evaluated, as FFF beams are insensitive to symmetry changes in this direction. Methods for characterising beam energy are evaluated and the impact of beam energy on profile shape is compared to that of cFF beams. In-plane symmetry can be measured, as for cFF beams, using observed changes in profile, whereas cross-plane symmetry can be measured by acquiring profiles at collimator angles of 0° and 180°. Beam energy and 'unflatness' can be measured, as with cFF beams, from observed shifts in profile with changing beam energy. Normalising the inflection points of FFF beams to 55% results in equivalent penumbra and field size measurements within 0.5 mm of conventional methods, with the exception of 40 cm × 40 cm fields at a depth of 20 cm. New methods are proposed that make it possible to independently carry out set-up and QA measurements of beam energy, flatness, symmetry and field size of an FFF beam without the need to reference an equivalent flattened beam of the same energy. The proposed methods can also be used to carry out this QA for flattened beams, resulting in universal definitions and methods for MV beams. This is presented for beams produced by an Elekta linear accelerator, but is
Maximizing ROI (return on information)
Energy Technology Data Exchange (ETDEWEB)
McDonald, B.
2000-05-01
The role and importance of managing information are discussed, underscored by a quote from the report of the International Data Corporation, according to which Fortune 500 companies lost $12 billion in 1999 due to inefficiencies resulting from intellectual re-work, substandard performance, and inability to find knowledge resources. The report predicts that this figure will rise to $31.5 billion by 2003. Key impediments to implementing knowledge management systems are identified as: the cost and human resources requirements of deployment; the inflexibility of historical systems to adapt to change; and the difficulty of achieving corporate acceptance of inflexible software products that require changes in 'normal' ways of doing business. The author recommends the use of model-, document- and rule-independent systems with a document-centered interface (DCI), employing rapid application development (RAD) and object technologies and visual model development, which eliminate these problems, making it possible for companies to maximize their return on information (ROI) and achieve substantial savings in implementation costs.
Maximal Bell's inequality violation for non-maximal entanglement
International Nuclear Information System (INIS)
Kobayashi, M.; Khanna, F.; Mann, A.; Revzen, M.; Santana, A.
2004-01-01
Bell's inequality violation (BIQV) for correlations of polarization is studied for a product state of two two-mode squeezed vacuum (TMSV) states. The violation allowed is shown to attain its maximal limit for all values of the squeezing parameter ζ. We show via an explicit example that a state whose entanglement is not maximal allows maximal BIQV. The Wigner function of the state is non-negative and the average value of either polarization is nil.
Utility Maximization in Nonconvex Wireless Systems
Brehmer, Johannes
2012-01-01
This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.
Maximally Symmetric Composite Higgs Models.
Csáki, Csaba; Ma, Teng; Shu, Jing
2017-09-29
Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.
Principles of maximally classical and maximally realistic quantum ...
Indian Academy of Sciences (India)
Principles of maximally classical and maximally realistic quantum mechanics. S M ROY. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract. Recently Auberson, Mahoux, Roy and Singh have proved a long standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...
Goldenberg, Shira M; Chettiar, Jill; Simo, Annick; Silverman, Jay G; Strathdee, Steffanie A; Montaner, Julio S G; Shannon, Kate
2014-01-01
To explore factors associated with early sex work initiation and model the independent effect of early initiation on HIV infection and prostitution arrests among adult sex workers (SWs). Baseline data (2010-2011) were drawn from a cohort of SWs who exchanged sex for money within the last month and were recruited through time-location sampling in Vancouver, Canada. Analyses were restricted to adults ≥18 years old. SWs completed a questionnaire and HIV/sexually transmitted infection testing. Using multivariate logistic regression, we identified factors associated with early sex work initiation and its independent effect on HIV infection and prostitution arrests among adult SWs. Of 508 SWs, 193 (38.0%) reported early sex work initiation, 78.53% of them primarily street-involved SWs and 21.46% off-street SWs. HIV prevalence was 11.22%, which was 19.69% among early initiates. Early initiates were more likely to be Canadian born [adjusted odds ratio (AOR): 6.8, 95% confidence interval (CI): 2.42 to 19.02], to inject drugs (AOR: 1.6, 95% CI: 1.0 to 2.5), and to have worked for a manager (AOR: 2.22, 95% CI: 1.3 to 3.6) or been coerced into sex work (AOR: 2.3, 95% CI: 1.14 to 4.44). Early initiation retained an independent effect on increased risk of HIV infection (AOR: 2.5, 95% CI: 1.3 to 3.2) and prostitution arrests (AOR: 2.0, 95% CI: 1.3 to 3.2). Adolescent sex work initiation is concentrated among marginalized, drug-involved, and street-involved SWs. Early initiation holds an independent increased effect on HIV infection and criminalization of adult SWs. Findings suggest the need for evidence-based approaches to reduce harm among adult and youth SWs.
Maximizing and customer loyalty: Are maximizers less loyal?
Directory of Open Access Journals (Sweden)
Linda Lai
2011-06-01
Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.
Implications of maximal Jarlskog invariant and maximal CP violation
International Nuclear Information System (INIS)
Rodriguez-Jauregui, E.; Universidad Nacional Autonoma de Mexico
2001-04-01
We argue here why the CP-violating phase Φ in the quark mixing matrix is maximal, that is, Φ = 90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non-commuting Hermitian mass matrices. In this article we derive the conditions for Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP-violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP-violating phase Φ takes the value Φ = 90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that the CP-violating phase Φ is maximal in order to let nature treat all of the quark mixing matrix moduli democratically. (orig.)
Goldstein, Mandy; Murray, Stuart B; Griffiths, Scott; Rayner, Kathryn; Podkowka, Jessica; Bateman, Joel E; Wallis, Andrew; Thornton, Christopher E
2016-11-01
Anorexia nervosa (AN) is a severe psychiatric illness with little evidence supporting treatment in adults. Among adolescents with AN, family-based treatment (FBT) is considered the first-line outpatient approach, with a growing evidence base. However, research on FBT has stemmed from specialist services in research/public health settings. This study investigated the effectiveness of FBT in a case series of adolescent AN treated in a private practice setting. Thirty-four adolescents with full or partial AN, diagnosed according to DSM-IV criteria, participated and were assessed at pretreatment and post-treatment. Assessments included change in % expected body weight, mood, and eating pathology. Significant weight gain was observed from pretreatment to post-treatment. 45.9% of the sample demonstrated full weight restoration and a further 43.2% achieved partial weight-based remission. Missing data precluded an examination of change in mood and ED psychopathology. Effective dissemination across different service types is important to the wider availability of evidence-based treatments. These weight restoration data lend preliminary support to the implementation of FBT in real world treatment settings. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:1023-1026).
Phenomenology of maximal and near-maximal lepton mixing
International Nuclear Information System (INIS)
Gonzalez-Garcia, M. C.; Pena-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.
2001-01-01
The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 − 2 sin²θ_ex and quantify the present experimental status of |ε|. The strongest constraint on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with a confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 2×10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 2×10⁻⁷ eV² the full interval of |ε| is allowed. We also consider maximal ν_e mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.
Maximal quantum Fisher information matrix
International Nuclear Information System (INIS)
Chen, Yu; Yuan, Haidong
2017-01-01
We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)
Independent component analysis: recent advances
Hyvärinen, Aapo
2013-01-01
Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components that are maximally independent and non-Gaussian (non-normal). Its fundamental difference to classical multi-variate statistical methods is in the assumption of non-Gaussianity, which enables the identification of original, underlying components, in contrast to classical methods. The basic theory of independent component analysis was mainly developed in th...
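As a minimal illustration of the idea above, that maximizing non-Gaussianity identifies the underlying components, here is a sketch of deflationary FastICA with a tanh nonlinearity; the function name, iteration counts, and tolerances are assumptions, not code from the cited work:

```python
import numpy as np

def fastica(X, n_iter=200, tol=1e-6, seed=0):
    """Minimal deflationary FastICA sketch (tanh nonlinearity).
    X: (n_signals, n_samples) mixed observations.
    Returns an unmixing matrix W such that W @ X_centered ~ sources."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(np.cov(X))
    K = E @ np.diag(d ** -0.5) @ E.T
    Z = K @ X
    n = X.shape[0]
    W = np.zeros((n, n))
    rng = np.random.default_rng(seed)
    for i in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wz = w @ Z
            g = np.tanh(wz)
            g_prime = 1.0 - g ** 2
            # Fixed-point update: w+ = E[Z g(w.Z)] - E[g'(w.Z)] w
            w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
            # Deflation: decorrelate from previously extracted components
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < tol
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ K  # unmixing matrix for the centered observations
```

As is standard for ICA, components are recovered only up to permutation, sign, and scale.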
Lange, L. H.
1974-01-01
Five different methods for determining the maximizing condition for x(a - x) are presented. Included are the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
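For a concrete instance, one calculus-free derivation of the kind the article surveys (not necessarily one of its five) is completing the square:

```latex
x(a-x) \;=\; \frac{a^2}{4} - \left(x - \frac{a}{2}\right)^2 \;\le\; \frac{a^2}{4},
```

with equality exactly when x = a/2, so the product is maximized at the midpoint.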
Finding Maximal Quasiperiodicities in Strings
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Pedersen, Christian N. S.
2000-01-01
Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log² n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.
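To make the underlying definition concrete, here is a naive quadratic-time sketch of a "cover" (per Apostolico and Ehrenfeucht, a string is quasiperiodic if the occurrences of some shorter string cover it). This is not the paper's O(n log n) suffix-tree algorithm, and the function names are illustrative:

```python
def covers(q, s):
    """True if occurrences of q cover every position of s (q is a 'cover' of s)."""
    n, m = len(s), len(q)
    if m == 0 or m > n:
        return False
    covered_until = 0
    i = s.find(q)
    while i != -1:
        if i > covered_until:        # uncovered gap before this occurrence
            return False
        covered_until = max(covered_until, i + m)
        i = s.find(q, i + 1)
    return covered_until == n

def is_quasiperiodic(s):
    """Naive test: s is quasiperiodic if some proper prefix covers it.
    (A cover must occur at position 0, so any cover of s is a prefix of s.)"""
    return any(covers(s[:m], s) for m in range(1, len(s)))
```

For example, "abaababaaba" is quasiperiodic with cover "aba", while "abaab" has no cover shorter than itself.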
Power Converters Maximize Outputs Of Solar Cell Strings
Frederick, Martin E.; Jermakian, Joel B.
1993-01-01
Microprocessor-controlled dc-to-dc power converters were devised to maximize the power transferred from solar photovoltaic strings to storage batteries and other electrical loads. The converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. The main points of the invention are: a single controller is used to control and optimize any number of "dumb" tracker units and strings independently; power out of the converters is maximized; and the controller in the system is a microprocessor.
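The abstract does not disclose the patented control algorithm. A common hill-climbing scheme for the same goal (maximum power point tracking) is perturb-and-observe, sketched here with a hypothetical power curve and illustrative parameter names:

```python
def perturb_and_observe(measure_power, v_start, step=0.1, iterations=100):
    """Hill-climbing MPPT sketch: perturb the operating voltage, keep the
    direction that increases measured power, reverse when power drops."""
    v = v_start
    direction = 1.0
    p_prev = measure_power(v)
    for _ in range(iterations):
        v += direction * step
        p = measure_power(v)
        if p < p_prev:
            direction = -direction   # overshot the maximum: reverse direction
        p_prev = p
    return v
```

With a fixed step the operating point oscillates around the maximum power point; real controllers often shrink the step as they converge.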
Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo
2016-01-01
Motivated by the 750 GeV diphoton excess found at the LHC, we compute the maximal width into γγ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
Schwabe, Tobias
2014-07-28
Some representative density functionals are assessed for isomerization reactions in which heteroatoms are systematically substituted with heavier members of the same element group. By this, it is investigated whether the functional performance depends on the elements involved, i.e., on the external potential imposed by the atomic nuclei. Special emphasis is placed on reliable theoretical reference data and the attempt to minimize basis set effects. Both issues are challenging for molecules including heavy elements. The data suggest that no general bias can be identified for the functionals under investigation except in one case, M11-L. Nevertheless, large deviations from the reference data can be found for all functional approximations in some cases. The average error range for the nine functionals in this test is 17.6 kcal mol⁻¹. These outliers diminish the general reliability of density functional approximations.
Directory of Open Access Journals (Sweden)
White Brian
2010-11-01
Background: The genus Neisseria contains two important yet very different pathogens, N. meningitidis and N. gonorrhoeae, in addition to non-pathogenic species, of which N. lactamica is the best characterized. Genomic comparisons of these three bacteria will provide insights into the mechanisms and evolution of pathogenesis in this group of organisms, which are applicable to understanding these processes more generally. Results: Non-pathogenic N. lactamica exhibits very similar population structure and levels of diversity to the meningococcus, whilst gonococci are essentially recent descendants of a single clone. All three species share a common core gene set estimated to comprise around 1190 CDSs, corresponding to about 60% of the genome. However, some of the nucleotide sequence diversity within this core genome is particular to each group, indicating that cross-species recombination is rare in this shared core gene set. Other than the meningococcal cps region, which encodes the polysaccharide capsule, relatively few members of the large accessory gene pool are exclusive to one species group, and cross-species recombination within this accessory genome is frequent. Conclusion: The three Neisseria species groups represent coherent biological and genetic groupings which appear to be maintained by low rates of inter-species horizontal genetic exchange within the core genome. There is extensive evidence for exchange among positively selected genes and the accessory genome, and some evidence of hitch-hiking of housekeeping genes with other loci. It is not possible to define a 'pathogenome' for this group of organisms and the disease-causing phenotypes are therefore likely to be complex, polygenic, and different among the various disease-associated phenotypes observed.
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2009-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.80 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
Jacob, Christian P; Nguyen, Thuy Trang; Dempfle, Astrid; Heine, Monika; Windemuth-Kieselbach, Christine; Baumann, Katarina; Jacob, Florian; Prechtl, Julian; Wittlich, Maike; Herrmann, Martin J; Gross-Lesch, Silke; Lesch, Klaus-Peter; Reif, Andreas
2010-06-01
While an interactive effect of genes with adverse life events is increasingly appreciated in current concepts of depression etiology, no data are presently available on interactions between genetic and environmental (G x E) factors with respect to personality and related disorders. The present study therefore aimed to detect main effects as well as interactions of serotonergic candidate genes (coding for the serotonin transporter, 5-HTT; the serotonin autoreceptor, HTR1A; and the enzyme which synthesizes serotonin in the brain, TPH2) with the burden of life events (#LE) in two independent samples consisting of 183 patients suffering from personality disorders and 123 patients suffering from adult attention deficit/hyperactivity disorder (aADHD). Simple analyses ignoring possible G x E interactions revealed no evidence for associations of either #LE or of the considered polymorphisms in 5-HTT and TPH2. Only the G allele of HTR1A rs6295 seemed to increase the risk of emotional-dramatic cluster B personality disorders (p = 0.019, in the personality disorder sample) and to decrease the risk of anxious-fearful cluster C personality disorders (p = 0.016, in the aADHD sample). We extended the initial simple model by taking a G x E interaction term into account, since this approach may better fit the data indicating that the effect of a gene is modified by stressful life events or, vice versa, that stressful life events only have an effect in the presence of a susceptibility genotype. By doing so, we observed nominal evidence for G x E effects as well as main effects of 5-HTT-LPR and the TPH2 SNP rs4570625 on the occurrence of personality disorders. Further replication studies, however, are necessary to validate the apparent complexity of G x E interactions in disorders of human personality.
Alternative approaches to maximally supersymmetric field theories
International Nuclear Information System (INIS)
Broedel, Johannes
2010-01-01
employing the low-energy limit of string theory, and the double-soft limit relation is indeed shown to hold. However, if the modified action has E_{7(7)} symmetry, the single-soft scalar limit of any amplitude should vanish. This not being the case suggests that the E_{7(7)} symmetry is broken by the R^4 counterterm. Finally, the Grassmannian formulation of N=4 SYM is investigated in a third part of the thesis. Any amplitude in N=4 SYM theory can be expressed as a linear combination of certain infrared (IR) divergent integrals. Known as leading singularities, the coefficients of these integrals completely determine the structure of an amplitude. From field-theory calculations it is known that the leading singularities are not independent, but are subject to a set of so-called IR equations. The alternative Grassmannian formulation is conjectured to describe the leading singularities as certain linear combinations of residues of a multidimensional complex integral. These residues are not independent but are related by generalized residue theorems (GRTs), which are multidimensional generalizations of Cauchy's theorem. Indeed, expressing the leading singularities known from field-theory calculations in terms of these residues supports the conjecture that the IR equations can be derived from GRTs. Here it is shown that GRTs in the Grassmannian formulation give rise not only to IR equations but to a larger set of constraints, which can be derived by considering the dual conformal anomaly of one-loop amplitudes. Explicit maps between GRTs and both dual conformal constraints and IR equations are deduced and discussed. (orig.)
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-02-26
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
Directory of Open Access Journals (Sweden)
A. Garmroodi Asil
2017-09-01
To further reduce the sulfur dioxide emission of the entire refining process, two scenarios, acid gas preheat and air preheat, are investigated when either is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and the highest combustion chamber temperature are slightly higher for acid gas preheat, but air preheat is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.
Are Independent Probes Truly Independent?
Camp, Gino; Pecher, Diane; Schmidt, Henk G.; Zeelenberg, Rene
2009-01-01
The independent cue technique has been developed to test traditional interference theories against inhibition theories of forgetting. In the present study, the authors tested the critical criterion for the independence of independent cues: Studied cues not presented during test (and unrelated to test cues) should not contribute to the retrieval…
Maximizing Entropy over Markov Processes
DEFF Research Database (Denmark)
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2013-01-01
The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....
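The Interval-Markov-Chain machinery of the paper is not reproduced in the abstract, but the quantity being maximized can be illustrated concretely. The following sketch (plain NumPy; all names and the example chains are ours, not the paper's) computes the entropy rate of a fixed Markov chain, i.e. the per-step information an observer gains from the process:

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate (bits/step) of an ergodic Markov chain with transition matrix P."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
    pi = pi / pi.sum()
    # H = -sum_i pi_i sum_j P_ij log2 P_ij, with 0 log 0 taken as 0.
    logP = np.log2(np.where(P > 0, P, 1.0))   # log2(1) = 0 handles zero entries
    return float(-(pi[:, None] * P * logP).sum())

# A fair-coin chain attains the 1 bit/step maximum over two-state chains;
# a sticky chain reveals less per step.
P_fair = np.array([[0.5, 0.5], [0.5, 0.5]])
P_sticky = np.array([[0.9, 0.1], [0.1, 0.9]])
print(entropy_rate(P_fair))    # 1.0
print(entropy_rate(P_sticky))  # ≈ 0.469
```

Maximizing this functional over all chains consistent with an interval specification is the harder problem the paper addresses.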
Maximizing entropy over Markov processes
DEFF Research Database (Denmark)
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2014-01-01
The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...
Chamaebatiaria millefolium (Torr.) Maxim.: fernbush
Nancy L. Shaw; Emerenciana G. Hurd
2008-01-01
Fernbush - Chamaebatiaria millefolium (Torr.) Maxim. - the only species in its genus, is endemic to the Great Basin, Colorado Plateau, and adjacent areas of the western United States. It is an upright, generally multistemmed, sweetly aromatic shrub 0.3 to 2 m tall. Bark of young branches is brown and becomes smooth and gray with age. Leaves are leathery, alternate,...
Adaptive maximal poisson-disk sampling on surfaces
Yan, Dongming; Wonka, Peter
2012-01-01
In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which
A note on a profit maximizing location model
S. Zhang (Shuzhong)
1997-01-01
textabstractIn this paper we discuss a locational model with a profit-maximizing objective. The model can be illustrated by the following situation. There is a set of potential customers in a given region. A firm enters the market and wants to sell a certain product to this set of customers. The
International Nuclear Information System (INIS)
Ferrandis, Javier
2005-01-01
The current experimental determination of the absolute values of the CKM elements indicates that 2|V_ub/(V_cb V_us)| = (1-z), with z given by z = 0.19 ± 0.14. This fact implies that irrespective of the form of the quark Yukawa matrices, the measured value of the SM CP phase β is approximately the maximum allowed by the measured absolute values of the CKM elements. This is β = (π/6 - z/3) for γ = (π/3 + z/3), which implies α = π/2. Alternatively, assuming that β is exactly maximal and using the experimental measurement sin(2β) = 0.726 ± 0.037, the phase γ is predicted to be γ = (π/2 - β) = 66.3° ± 1.7°. The maximality of β, if confirmed by near-future experiments, may give us some clues as to the origin of CP violation
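The stated prediction can be checked with a few lines of arithmetic. The sketch below (standard-library Python; the first-quadrant branch choice for the arcsine is our assumption) recovers the quoted γ from the measured sin(2β) under the maximal-β hypothesis γ = π/2 − β:

```python
import math

sin2beta = 0.726                      # measured central value of sin(2β)
beta = 0.5 * math.asin(sin2beta)      # take 2β in the first quadrant
gamma = math.pi / 2 - beta            # maximal-β prediction: γ = π/2 - β
alpha = math.pi - beta - gamma        # unitarity triangle: α + β + γ = π

print(math.degrees(gamma))  # ≈ 66.7°, consistent with the quoted 66.3° ± 1.7°
print(math.degrees(alpha))  # 90.0 — the abstract's α = π/2
```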
Strategy to maximize maintenance operation
Espinoza, Michael
2005-01-01
This project presents a strategic analysis to maximize maintenance operations in Alcan Kitimat Works in British Columbia. The project studies the role of maintenance in improving its overall maintenance performance. It provides strategic alternatives and specific recommendations addressing Kitimat Works key strategic issues and problems. A comprehensive industry and competitive analysis identifies the industry structure and its competitive forces. In the mature aluminium industry, the bargain...
Scalable Nonlinear AUC Maximization Methods
Khalid, Majdi; Ray, Indrakshi; Chitsaz, Hamidreza
2017-01-01
The area under the ROC curve (AUC) is a measure of interest in various machine learning and data mining applications. It has been widely used to evaluate classification performance on heavily imbalanced data. The kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines because of their capability in modeling the complex nonlinear structure underlying most real world-data. However, the high training complexity renders the kernelize...
Quench dynamics of topological maximally entangled states.
Chung, Ming-Chiang; Jhu, Yi-Hao; Chen, Pochung; Mou, Chung-Yu
2013-07-17
We investigate the quench dynamics of the one-particle entanglement spectra (OPES) for systems with topologically nontrivial phases. By using dimerized chains as an example, it is demonstrated that the evolution of OPES for the quenched bipartite systems is governed by an effective Hamiltonian which is characterized by a pseudospin in a time-dependent pseudomagnetic field S(k,t). The existence and evolution of the topological maximally entangled states (tMESs) are determined by the winding number of S(k,t) in the k-space. In particular, the tMESs survive only if nontrivial Berry phases are induced by the winding of S(k,t). In the infinite-time limit the equilibrium OPES can be determined by an effective time-independent pseudomagnetic field Seff(k). Furthermore, when tMESs are unstable, they are destroyed by quasiparticles within a characteristic timescale in proportion to the system size.
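As an illustration of the winding-number criterion, the sketch below evaluates the winding of a pseudospin field of the form commonly used for dimerized (SSH-like) chains; the specific parametrization S(k) = (t1 + t2 cos k, t2 sin k) and the parameter values are our own assumptions for illustration, not taken from the paper:

```python
import numpy as np

def winding_number(t1, t2, nk=2001):
    """Winding of S(k) = (t1 + t2*cos k, t2*sin k) around the origin
    as k sweeps the Brillouin zone [-pi, pi]."""
    k = np.linspace(-np.pi, np.pi, nk)
    sx = t1 + t2 * np.cos(k)
    sy = t2 * np.sin(k)
    theta = np.unwrap(np.arctan2(sy, sx))   # continuous pseudospin angle
    return int(round((theta[-1] - theta[0]) / (2 * np.pi)))

print(winding_number(0.5, 1.0))  # 1 -> nontrivial phase (tMES expected)
print(winding_number(1.5, 1.0))  # 0 -> trivial phase
```

A nonzero winding signals the nontrivial Berry phase that, per the abstract, is required for the topological maximally entangled states to survive.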
FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION
Directory of Open Access Journals (Sweden)
Rahmawati Sukmaningrum
2017-04-01
Full Text Available This study aims to identify the types of maxims flouted in the conversation in the famous comedy show Indonesia Lawak Club. Likewise, it also tries to reveal the speakers' intention of flouting the maxim in the conversation during the show. The writers use a descriptive qualitative method in conducting this research. The data is taken from the dialogue of Indonesia Lawak Club and then analyzed based on Grice's cooperative principles. The researchers read the dialogue's transcripts, identify the maxims, and interpret the data to find the speakers' intention for flouting the maxims in the communication. The results show that there are four types of maxims flouted in the dialogue: maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.
Real-time topic-aware influence maximization using preprocessing.
Chen, Wei; Lin, Tian; Yang, Cheng
2016-01-01
Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes under a given influence diffusion model is maximized. Topic-aware influence diffusion models have been recently proposed to address the issue that influence between a pair of users is often topic-dependent and that information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate, providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
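The paper's preprocessing algorithms are not reproduced in the abstract, but the baseline they accelerate, greedy seed selection under the independent cascade model, can be sketched as follows. The toy graph, propagation probability, and Monte Carlo settings are all illustrative assumptions of ours:

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One independent-cascade run; returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, runs=200, p=0.1, seed=0):
    """Greedily add the node with the largest estimated marginal spread."""
    rng = random.Random(seed)
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: sum(simulate_ic(graph, seeds + [n], p, rng)
                                     for _ in range(runs)))
        seeds.append(best)
    return seeds

graph = {0: [1, 2, 3], 1: [4], 2: [4, 5], 3: [], 4: [6], 5: [6], 6: []}
print(greedy_im(graph, k=2))
```

Rerunning this greedy loop from scratch for every topic mixture is exactly the cost the paper's preprocessing is designed to avoid.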
Gap processing for adaptive maximal poisson-disk sampling
Yan, Dongming
2013-10-17
In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed. We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.
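The article's gap-detection structures are not shown in the abstract; as a minimal illustration of what a maximal Poisson-disk set is, the following dart-throwing sketch (uniform radius, unit square, and a rejection-count termination heuristic of our own choosing) accepts samples until repeated failures suggest no uncovered gap remains:

```python
import random

def poisson_disk_darts(r, max_misses=2000, seed=1):
    """Dart throwing in the unit square: accept a sample only if it keeps all
    pairwise distances >= r; stop after many consecutive rejections, which
    approximates maximality (remaining gaps are likely covered)."""
    rng = random.Random(seed)
    pts, misses = [], 0
    while misses < max_misses:
        x, y = rng.random(), rng.random()
        if all((x - px) ** 2 + (y - py) ** 2 >= r * r for px, py in pts):
            pts.append((x, y))
            misses = 0
        else:
            misses += 1
    return pts

pts = poisson_disk_darts(r=0.1)
print(len(pts))  # a few dozen points for r = 0.1
```

Brute-force rejection like this is quadratic per dart; the triangulation-based gap analysis of the paper is what makes exact, adaptive, dynamically updatable versions of this process practical.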
Gap processing for adaptive maximal poisson-disk sampling
Yan, Dongming; Wonka, Peter
2013-01-01
In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed. We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.
Adaptive maximal poisson-disk sampling on surfaces
Yan, Dongming
2012-01-01
In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which is the key ingredient of the adaptive maximal Poisson-disk sampling framework. Moreover, we adapt the presented sampling framework for remeshing applications. Several novel and efficient operators are developed for improving the sampling/meshing quality over the state-of-the-art. © 2012 ACM.
Are Independent Fiscal Institutions Really Independent?
Directory of Open Access Journals (Sweden)
Slawomir Franek
2015-08-01
Full Text Available In the last decade the number of independent fiscal institutions (also known as fiscal councils) has tripled. They play an important oversight role in fiscal policy-making in democratic societies, especially as they seek to restore public finance stability in the wake of the recent financial crisis. Although common functions of such institutions include analysis of fiscal policy, forecasting, monitoring compliance with fiscal rules, and costing of spending proposals, their roles, resources, and structures vary considerably across countries. The aim of the article is to determine the degree of independence of such institutions based on an analysis of the independence index of independent fiscal institutions. The index values may be useful for determining the relations between the degree of independence of fiscal councils and the fiscal performance of particular countries. The data used to calculate the index values are derived from the European Commission and the IMF, which collect information on the characteristics and activities of fiscal councils.
Uncountably many maximizing measures for a dense subset of continuous functions
Shinoda, Mao
2018-05-01
Ergodic optimization aims to single out dynamically invariant Borel probability measures which maximize the integral of a given ‘performance’ function. For a continuous self-map of a compact metric space and a dense set of continuous functions, we show the existence of uncountably many ergodic maximizing measures. We also show that, for a topologically mixing subshift of finite type and a dense set of continuous functions, there exist uncountably many ergodic maximizing measures with full support and positive entropy.
Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!
International Nuclear Information System (INIS)
Nutku, Yavuz
2003-01-01
Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems
Maximal energy extraction under discrete diffusive exchange
Energy Technology Data Exchange (ETDEWEB)
Hay, M. J., E-mail: hay@princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Schiff, J. [Department of Mathematics, Bar-Ilan University, Ramat Gan 52900 (Israel); Fisch, N. J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
Mixtures of maximally entangled pure states
Energy Technology Data Exchange (ETDEWEB)
Flores, M.M., E-mail: mflores@nip.up.edu.ph; Galapon, E.A., E-mail: eric.galapon@gmail.com
2016-09-15
We study the conditions when mixtures of maximally entangled pure states remain entangled. We found that the resulting mixed state remains entangled when the number of entangled pure states to be mixed is less than or equal to the dimension of the pure states. For the latter case of mixing a number of pure states equal to their dimension, we found that the mixed state is entangled provided that the entangled pure states to be mixed are not equally weighted. We also found that one can restrict the set of pure states that one can mix from in order to ensure that the resulting mixed state is genuinely entangled. Also, we demonstrate how these results could be applied as a way to detect entanglement in mixtures of the entangled pure states with noise.
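For two qubits, the equal-weights boundary case can be verified directly with the Peres-Horodecki (partial transpose) criterion, which is exact in 2×2 dimensions. The small NumPy sketch below is our own construction, mixing the Bell states |Φ+> and |Ψ+>:

```python
import numpy as np

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Φ+> = (|00> + |11>)/√2
psi = np.array([0, 1, 1, 0]) / np.sqrt(2)   # |Ψ+> = (|01> + |10>)/√2

def min_pt_eigenvalue(p):
    """Smallest eigenvalue of the partial transpose of
    p|Φ+><Φ+| + (1-p)|Ψ+><Ψ+|.  Negative <=> entangled
    (Peres-Horodecki, necessary and sufficient for two qubits)."""
    rho = p * np.outer(phi, phi) + (1 - p) * np.outer(psi, psi)
    # Partial transpose on the second qubit: swap its row/column indices.
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return float(np.linalg.eigvalsh(rho_pt).min())

print(min_pt_eigenvalue(0.7))  # ≈ -0.2 : unequal weights -> still entangled
print(min_pt_eigenvalue(0.5))  # ≈  0.0 : equal weights   -> separable
```

The minimum eigenvalue works out to -|2p-1|/2, so entanglement is lost exactly at equal weighting, matching the condition stated in the abstract.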
Maximizing benefits from resource development
International Nuclear Information System (INIS)
Skjelbred, B.
2002-01-01
The main objectives of Norwegian petroleum policy are to maximize the value creation for the country, develop a national oil and gas industry, and to be at the environmental forefront of long term resource management and coexistence with other industries. The paper presents a graph depicting production and net export of crude oil for countries around the world for 2002. Norway produced 3.41 mill b/d and exported 3.22 mill b/d. Norwegian petroleum policy measures include effective regulation and government ownership, research and technology development, and internationalisation. Research and development has been in five priority areas, including enhanced recovery, environmental protection, deep water recovery, small fields, and the gas value chain. The benefits of internationalisation include capitalizing on Norwegian competency, exploiting emerging markets and the assurance of long-term value creation and employment. 5 figs
Maximizing synchronizability of duplex networks
Wei, Xiang; Emenheiser, Jeffrey; Wu, Xiaoqun; Lu, Jun-an; D'Souza, Raissa M.
2018-01-01
We study the synchronizability of duplex networks formed by two randomly generated network layers with different patterns of interlayer node connections. According to the master stability function, we use the smallest nonzero eigenvalue and the eigenratio between the largest and the second smallest eigenvalues of supra-Laplacian matrices to characterize synchronizability on various duplexes. We find that the interlayer linking weight and linking fraction have a profound impact on synchronizability of duplex networks. The increasingly large interlayer coupling weight is found to cause either decreasing or constant synchronizability for different classes of network dynamics. In addition, negative node degree correlation across interlayer links outperforms positive degree correlation when most interlayer links are present. The reverse is true when a few interlayer links are present. The numerical results and understanding based on these representative duplex networks are illustrative and instructive for building insights into maximizing synchronizability of more realistic multiplex networks.
Enumerating all maximal frequent subtrees in collections of phylogenetic trees.
Deepak, Akshay; Fernández-Baca, David
2014-01-01
A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.
Enumerating all maximal frequent subtrees in collections of phylogenetic trees
2014-01-01
Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474
DEFF Research Database (Denmark)
Ringe, Wolf-Georg
2013-01-01
This paper re-evaluates the corporate governance concept of ‘board independence’ against the disappointing experiences during the 2007-08 financial crisis. Independent or outside directors had long been seen as an essential tool to improve the monitoring role of the board. Yet the crisis revealed...... that they did not prevent firms' excessive risk taking; further, these directors sometimes showed serious deficits in understanding the business they were supposed to control, and remained passive in addressing structural problems. A closer look reveals that under the surface of seemingly unanimous consensus...
VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS
Directory of Open Access Journals (Sweden)
Desak Putu Eka Pratiwi
2015-07-01
Full Text Available Maxim is a principle that must be obeyed by all participants, textually and interpersonally, in order to have a smooth communication process. Conversational maxims are divided into four types, namely maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner of speaking. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to his speaking partner. Violation of a maxim in a conversation will result in an awkward impression. Examples of violation include given information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate the maxims to create unique and controversial advertisements. This study aims to examine the violation of maxims in conversations in TV ads. The source of data in this research is food advertisements aired on TV media. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with an informal method. The results of this study show an interesting fact: the violation of maxims in the conversations found in the advertisements actually makes the advertisements very attractive and gives them high value.
Maximal Regularity of the Discrete Harmonic Oscillator Equation
Directory of Open Access Journals (Sweden)
Airton Castro
2009-01-01
Full Text Available We give a representation of the solution for the best approximation of the harmonic oscillator equation formulated in a general Banach space setting, and a characterization of lp-maximal regularity—or well posedness—solely in terms of R-boundedness properties of the resolvent operator involved in the equation.
Twitch interpolation technique in testing of maximal muscle strength
DEFF Research Database (Denmark)
Bülow, P M; Nørregaard, J; Danneskiold-Samsøe, B
1993-01-01
The aim was to study the methodological aspects of the muscle twitch interpolation technique in estimating the maximal force of contraction in the quadriceps muscle utilizing commercial muscle testing equipment. Six healthy subjects participated in seven sets of experiments testing the effects...
Algorithms over partially ordered sets
DEFF Research Database (Denmark)
Baer, Robert M.; Østerby, Ole
1969-01-01
We consider algorithms over partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.
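The enumeration described above can be sketched (in Python rather than the paper's Algol) as a depth-first walk of the cover relation. The poset, function names, and example below are illustrative, not taken from the paper.

```python
def maximal_chains(elements, leq):
    """Enumerate all maximal chains of a finite poset.

    elements: the poset's elements; leq(a, b): True iff a <= b in the order.
    Returns maximal chains as lists ordered bottom-to-top.
    """
    elements = list(elements)

    def less(a, b):                      # strict order
        return a != b and leq(a, b)

    # Cover relation: b covers a when a < b with nothing strictly in between.
    covers = {a: [b for b in elements
                  if less(a, b) and
                  not any(less(a, c) and less(c, b) for c in elements)]
              for a in elements}
    minimal = [a for a in elements if not any(less(b, a) for b in elements)]

    chains = []

    def extend(chain):
        ups = covers[chain[-1]]
        if not ups:                      # reached a maximal element
            chains.append(chain)
        for b in ups:
            extend(chain + [b])

    for m in minimal:
        extend([m])
    return chains


# Divisibility order on {1, 2, 3, 6}: the maximal chains are 1|2|6 and 1|3|6.
print(maximal_chains([1, 2, 3, 6], lambda a, b: b % a == 0))
```

Counting the returned chains answers the paper's combinatorial question for any small concrete poset.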
Maximizing the optical network capacity.
Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I
2016-03-06
Most of the digital data transmitted are carried by optical fibres, which form the greater part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity.
On Maximal Non-Disjoint Families of Subsets
Directory of Open Access Journals (Sweden)
Yu. A. Zuev
2017-01-01
The paper studies maximal non-disjoint families of subsets of a finite set. Non-disjointness means that any two subsets in the family have a nonempty intersection; maximality means that no new subset can be added to the family without violating the non-disjointness condition. Studying the properties of such families is an important branch of extremal set theory. Along with purely combinatorial interest, the problems considered here play an important role in informatics, error-correcting coding, and cryptography. The problem first saw the light of day in the 1961 paper of Erdos, Ko and Rado, which established the maximum size of a non-disjoint family of subsets of equal cardinality. A 1974 publication of Erdos and Kleitman estimated the number of maximal non-disjoint families of subsets without requiring their cardinalities to be equal. These authors did not establish the asymptotics of the logarithm of the number of such families as the cardinality of the base finite set tends to infinity, but they suggested such an asymptotics as a hypothesis. A.D. Korshunov, in two publications in 2003 and 2005, established the asymptotics for the number of non-disjoint families of subsets of arbitrary cardinalities without the maximality condition. The approach used in the paper to study families of subsets rests on their description in the language of Boolean functions: a one-to-one correspondence between a family of subsets and a Boolean function is established by taking the characteristic vectors of the subsets in the family to be the true points of the function. The main theoretical result of the paper is that the maximal non-disjoint families are in one-to-one correspondence with the monotone self-dual Boolean functions. When estimating the number of maximal non-disjoint families, this allowed the use of the result of A.A. Sapozhenko, who established the asymptotics of the number of the...
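The stated correspondence can be checked by brute force for a tiny ground set. The sketch below (not from the paper) verifies that the maximal non-disjoint families on a 3-element set coincide with the true-point sets of the monotone self-dual Boolean functions on 3 variables; all names are illustrative.

```python
from itertools import product

N = 3
FULL = (1 << N) - 1
SETS = list(range(1, FULL + 1))       # nonempty subsets of a 3-set, as bitmasks

def intersecting(fam):
    return all(a & b for a in fam for b in fam)

def is_maximal(fam):
    # Nothing outside the family can be added without creating a disjoint pair.
    return all(any(s & t == 0 for t in fam) for s in SETS if s not in fam)

# Enumerate all maximal non-disjoint (pairwise-intersecting) families.
max_families = set()
for mask in range(1, 1 << len(SETS)):
    fam = frozenset(s for i, s in enumerate(SETS) if mask >> i & 1)
    if intersecting(fam) and is_maximal(fam):
        max_families.add(fam)

# Enumerate monotone self-dual Boolean functions on N variables; record the
# true points of each (the monotone self-dual conditions force f(0) = 0).
self_dual = set()
for bits in product((0, 1), repeat=1 << N):
    monotone = all(bits[x] <= bits[y]
                   for x in range(1 << N) for y in range(1 << N) if x | y == y)
    dual = all(bits[x] != bits[FULL ^ x] for x in range(1 << N))
    if monotone and dual:
        self_dual.add(frozenset(x for x in range(1 << N) if bits[x]))

print(len(max_families), len(self_dual))   # both are 4 for N = 3
```

For N = 3 the four objects on each side are the three "stars" (all sets containing a fixed element, matching the dictator functions) and the majority family (all sets of size at least 2, matching the majority function).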
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generation is a necessary feature appear to have reduced their signal variability by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
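A quick way to see why adding states helps: for an irreversible linear chain whose fixed total mean dwell time is split equally among n states, the passage time is Erlang-distributed and its coefficient of variation is 1/sqrt(n). This Monte-Carlo sketch is illustrative only, not the authors' optimization code.

```python
import random
import statistics

def passage_time(n, total_mean=1.0):
    """Time to traverse n irreversible states, each with mean dwell total_mean/n."""
    rate = n / total_mean
    return sum(random.expovariate(rate) for _ in range(n))

random.seed(0)
for n in (1, 4, 16, 64):
    samples = [passage_time(n) for _ in range(10000)]
    cv = statistics.stdev(samples) / statistics.mean(samples)  # relative variability
    print(f"n={n:3d}  CV ~ {cv:.3f}   theory 1/sqrt(n) = {n ** -0.5:.3f}")
```

The coefficient of variation shrinks as the chain lengthens, mirroring the paper's conclusion that reliability grows with the number of irreversible states.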
Cycle length maximization in PWRs using empirical core models
International Nuclear Information System (INIS)
Okafor, K.C.; Aldemir, T.
1987-01-01
The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem.
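To illustrate the reduction to linear programming, the toy sketch below maximizes a linear cycle-length correlation over two zone enrichments subject to box bounds and a surrogate peaking constraint. All coefficients are invented for illustration (none come from the paper), and the tiny LP is solved by enumerating vertices of the feasible polygon, where a linear objective attains its optimum.

```python
from itertools import combinations

# Hypothetical fitted correlation: cycle length L = 100 + 40*e1 + 25*e2 (days),
# with e1, e2 the enrichments (wt%) of two control zones.
const, objective = 100.0, (40.0, 25.0)

# Constraints a*e1 + b*e2 <= c: box bounds plus a power-peaking surrogate.
constraints = [
    ( 1.0,  0.0,  4.0),   # e1 <= 4
    (-1.0,  0.0, -2.0),   # e1 >= 2
    ( 0.0,  1.0,  4.0),   # e2 <= 4
    ( 0.0, -1.0, -2.0),   # e2 >= 2
    ( 0.3,  0.2,  1.9),   # hypothetical peaking-factor limit
]

def feasible(x, tol=1e-9):
    return all(a * x[0] + b * x[1] <= c + tol for a, b, c in constraints)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                      # parallel boundaries: no vertex
    x = ((c1 * b2 - c2 * b1) / det,   # Cramer's rule for the 2x2 system
         (a1 * c2 - a2 * c1) / det)
    if feasible(x):
        val = const + objective[0] * x[0] + objective[1] * x[1]
        if best is None or val > best[0]:
            best = (val, x)

print(best)   # the optimum sits where box and peaking constraints meet
```

A production-scale version would hand the same correlation coefficients to a real LP solver; the structure of the problem is unchanged.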
Efficient Conservation in a Utility-Maximization Framework
Directory of Open Access Journals (Sweden)
Frank W. Davis
2006-06-01
Systematic planning for biodiversity conservation is being conducted at scales ranging from global to national to regional. The prevailing planning paradigm is to identify the minimum land allocations needed to reach specified conservation targets or maximize the amount of conservation accomplished under an area or budget constraint. We propose a more general formulation for setting conservation priorities that involves goal setting, assessing the current conservation system, developing a scenario of future biodiversity given the current conservation system, and allocating available conservation funds to alter that scenario so as to maximize future biodiversity. Under this new formulation for setting conservation priorities, the value of a site depends on resource quality, threats to resource quality, and costs. This planning approach is designed to support collaborative processes and negotiation among competing interest groups. We demonstrate these ideas with a case study of the Sierra Nevada bioregion of California.
Disk Density Tuning of a Maximal Random Packing.
Ebeida, Mohamed S; Rushdi, Ahmad A; Awad, Muhammad A; Mahmoud, Ahmed H; Yan, Dong-Ming; English, Shawn A; Owens, John D; Bajaj, Chandrajit L; Mitchell, Scott A
2016-08-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.
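A maximal random packing of the kind used as the framework's starting point can be produced by naive dart throwing. The sketch below is an approximation, not the authors' MPS or tuning algorithm: it accepts uniform candidates that keep a minimum separation r and uses a long run of consecutive rejections as a practical proxy for maximality.

```python
import random

def dart_throwing_mps(r, max_misses=2000, seed=1):
    """Approximate maximal Poisson-disk sample of the unit square.

    Accept a uniform candidate iff it lies at least r from every accepted
    centre; stop once max_misses candidates in a row have been rejected,
    which leaves little uncovered room (approximate coverage maximality).
    """
    rng = random.Random(seed)
    pts, misses = [], 0
    while misses < max_misses:
        p = (rng.random(), rng.random())
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r for q in pts):
            pts.append(p)
            misses = 0
        else:
            misses += 1
    return pts

pts = dart_throwing_mps(r=0.1)
print(len(pts), "disks placed")
```

Density tuning in the paper's sense would then relocate, inject, or eject individual disks of this sample while preserving the separation and coverage bounds.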
Does mental exertion alter maximal muscle activation?
Directory of Open Access Journals (Sweden)
Vianney eRozand
2014-09-01
Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed in a randomized order three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), and (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.
AUC-Maximizing Ensembles through Metalearning.
LeDell, Erin; van der Laan, Mark J; Petersen, Maya
2016-05-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms at maximizing the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
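Because AUC is a non-smooth rank statistic, a metalearner can optimize it directly by searching over ensemble weights rather than using gradients. The toy sketch below (invented data and scores, not the Super Learner implementation) grid-searches the convex combination of two hypothetical base learners for the AUC-maximizing weight.

```python
def auc(labels, scores):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical base-learner scores on a small validation fold.
y      = [0, 0, 0, 0, 1, 1, 1, 1]
model1 = [0.1, 0.4, 0.35, 0.8, 0.7, 0.45, 0.9, 0.5]
model2 = [0.3, 0.2, 0.6, 0.1, 0.4, 0.7, 0.2, 0.9]

# Grid-search the ensemble weight w on w*model1 + (1-w)*model2 to maximize AUC.
best_w, best_auc = max(
    ((w / 100, auc(y, [w / 100 * a + (1 - w / 100) * b
                       for a, b in zip(model1, model2)]))
     for w in range(101)),
    key=lambda t: t[1])
print(best_w, best_auc)
```

Since the grid includes w = 0 and w = 1, the ensemble AUC is never worse than the better base learner on the search data; a faithful version would maximize the cross-validated AUC instead.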
On maximal massive 3D supergravity
Bergshoeff, Eric A; Hohm, Olaf; Rosseel, Jan; Townsend, Paul K
2010-01-01
We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.
Activity versus outcome maximization in time management.
Malkoc, Selin A; Tonietto, Gabriela N
2018-04-30
Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss such factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.
On the maximal superalgebras of supersymmetric backgrounds
International Nuclear Information System (INIS)
Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan
2009-01-01
In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS_4 × S^7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS_4 × S^7, and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.
Task-oriented maximally entangled states
International Nuclear Information System (INIS)
Agrawal, Pankaj; Pradhan, B
2010-01-01
We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.
Maximally Entangled Multipartite States: A Brief Survey
International Nuclear Information System (INIS)
Enríquez, M; Wintrowicz, I; Życzkowski, K
2016-01-01
The problem of identifying maximally entangled quantum states of composite quantum systems is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of a maximally entangled state depends on the measure used.
Corporate Social Responsibility and Profit Maximizing Behaviour
Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta
2005-01-01
We examine the behavior of a profit-maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer-driven) corporate social responsibility is an optimal choice compatible with profit-maximizing behavior.
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-07
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.
Maximal Entanglement in High Energy Physics
Directory of Open Access Journals (Sweden)
Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli
2017-11-01
We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final-state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows reproducing QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.
Maximal reductions in the Baker-Hausdorff formula
International Nuclear Information System (INIS)
Kolsrud, M.
1992-05-01
A preliminary expression for the Baker-Hausdorff formula is found up to ninth order, i.e. a series expansion of z in terms of multiple commutators, where e^z = e^x e^y with x and y non-commuting, up to ninth degree in x, y. By means of complete sets of linear relations between multiple commutators, maximal reduction of the number of different multiple commutators in the series is obtained.
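For orientation, the low-order terms of the series being reduced are the classical Baker-Hausdorff ones (the paper's contribution is the extension and maximal reduction through ninth order):

```latex
z = \log\bigl(e^{x} e^{y}\bigr)
  = x + y + \tfrac{1}{2}[x,y]
  + \tfrac{1}{12}\bigl([x,[x,y]] + [y,[y,x]]\bigr)
  - \tfrac{1}{24}\,[y,[x,[x,y]]] + \cdots
```

The number of linearly independent multiple commutators grows quickly with the degree, which is why the complete sets of linear relations mentioned above matter at ninth order.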
Competitive prices as profit-maximizing cartel prices
Houba, H.E.D.; Motchenkova, E.I.; Wen, Q.
2010-01-01
This discussion paper has resulted in a publication in Economics Letters, 114, 39-42. Even under antitrust enforcement, firms may still form a cartel in an infinitely-repeated oligopoly model when the discount factor is sufficiently close to one. We present a linear oligopoly model where the profit-maximizing cartel price converges to the competitive equilibrium price as the discount factor goes to one. We then identify a set of necessary conditions for this seemingly counter-intuitive result.
Subthalamic nucleus activity optimizes maximal effort motor responses in Parkinson's disease.
Anzak, Anam; Tan, Huiling; Pogosyan, Alek; Foltynie, Thomas; Limousin, Patricia; Zrinzo, Ludvic; Hariz, Marwan; Ashkan, Keyoumars; Bogdanovic, Marko; Green, Alexander L; Aziz, Tipu; Brown, Peter
2012-09-01
The neural substrates that enable individuals to achieve their fastest and strongest motor responses have long been enigmatic. Importantly, characterization of such activities may inform novel therapeutic strategies for patients with hypokinetic disorders, such as Parkinson's disease. Here, we ask whether the basal ganglia may play an important role, not only in the attainment of maximal motor responses under standard conditions but also in the setting of the performance enhancements known to be engendered by delivery of intense stimuli. To this end, we recorded local field potentials from deep brain stimulation electrodes implanted bilaterally in the subthalamic nuclei of 10 patients with Parkinson's disease, as they executed their fastest and strongest handgrips in response to a visual cue, which was accompanied by a brief 96-dB auditory tone on random trials. We identified a striking correlation between both theta/alpha (5-12 Hz) and high-gamma/high-frequency (55-375 Hz) subthalamic nucleus activity and force measures, which explained close to 70% of interindividual variance in maximal motor responses to the visual cue alone, when patients were ON their usual dopaminergic medication. Loud auditory stimuli were found to enhance reaction time and peak rate of development of force still further, independent of whether patients were ON or OFF l-DOPA, and were associated with increases in subthalamic nucleus power over a broad gamma range. However, the contribution of this broad gamma activity to the performance enhancements observed was only modest (≤13%). The results implicate frequency-specific subthalamic nucleus activities as substantial factors in optimizing an individual's peak motor responses at maximal effort of will, but much less so in the performance increments engendered by intense auditory stimuli.
Maximizing your return on people.
Bassi, Laurie; McMurrer, Daniel
2007-03-01
Though most traditional HR performance metrics don't predict organizational performance, alternatives simply have not existed--until now. During the past ten years, researchers Laurie Bassi and Daniel McMurrer have worked to develop a system that allows executives to assess human capital management (HCM) and to use those metrics both to predict organizational performance and to guide organizations' investments in people. The new framework is based on a core set of HCM drivers that fall into five major categories: leadership practices, employee engagement, knowledge accessibility, workforce optimization, and organizational learning capacity. By employing rigorously designed surveys to score a company on the range of HCM practices across the five categories, it's possible to benchmark organizational HCM capabilities, identify HCM strengths and weaknesses, and link improvements or back-sliding in specific HCM practices with improvements or shortcomings in organizational performance. The process requires determining a "maturity" score for each practice, based on a scale of 1 (low) to 5 (high). Over time, evolving maturity scores from multiple surveys can reveal progress in each of the HCM practices and help a company decide where to focus improvement efforts that will have a direct impact on performance. The authors draw from their work with American Standard, South Carolina's Beaufort County School District, and a bevy of financial firms to show how improving HCM scores led to increased sales, safety, academic test scores, and stock returns. Bassi and McMurrer urge HR departments to move beyond the usual metrics and begin using HCM measurement tools to gauge how well people are managed and developed throughout the organization. In this new role, according to the authors, HR can take on strategic responsibility and ensure that superior human capital management becomes central to the organization's culture.
Directory of Open Access Journals (Sweden)
Chakkrid Klin-eam
2009-01-01
We prove strong convergence theorems for finding a common element of the zero point set of a maximal monotone operator and the fixed point set of a hemirelatively nonexpansive mapping in a Banach space by using the monotone hybrid iteration method. Using these results, we obtain new convergence results for resolvents of maximal monotone operators and for hemirelatively nonexpansive mappings in a Banach space.
Bipartite Bell Inequality and Maximal Violation
International Nuclear Information System (INIS)
Li Ming; Fei Shaoming; Li-Jost Xian-Qing
2011-01-01
We present new Bell inequalities for arbitrary-dimensional bipartite quantum systems. The maximal violation of the inequalities is computed. These Bell inequalities are capable of detecting quantum entanglement of both pure and mixed quantum states more effectively.
Maximal Inequalities for Dependent Random Variables
DEFF Research Database (Denmark)
Hoffmann-Jorgensen, Jorgen
2016-01-01
Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max_{1<=k<=n} S_k ...
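A concrete instance is Kolmogorov's maximal inequality for independent mean-zero summands, P(max_{k<=n} |S_k| >= lam) <= Var(S_n)/lam^2. The sketch below (illustrative, unrelated to the paper's dependent-variable setting) checks the bound by simulation for a ±1 random walk.

```python
import random

# Kolmogorov's maximal inequality: for independent, mean-zero X_i,
#   P( max_{k<=n} |S_k| >= lam ) <= Var(S_n) / lam^2.
# Monte-Carlo check with +-1 steps, so Var(S_n) = n.
random.seed(2)
n, lam, trials = 100, 30, 4000
hits = 0
for _ in range(trials):
    s, peak = 0, 0
    for _ in range(n):
        s += random.choice((-1, 1))
        peak = max(peak, abs(s))    # running maximum of |S_k|
    hits += peak >= lam
empirical = hits / trials
bound = n / lam ** 2
print(f"empirical {empirical:.4f} <= bound {bound:.4f}")
```

The empirical exceedance probability falls well below the variance bound; the paper's interest is in what replaces such bounds once the X_i are dependent.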
Maximizing Function through Intelligent Robot Actuator Control
National Aeronautics and Space Administration — Maximizing Function through Intelligent Robot Actuator Control Successful missions to Mars and beyond will only be possible with the support of high-performance...
An ethical justification of profit maximization
DEFF Research Database (Denmark)
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit-maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain forms of profit (and utility) maximizing actions are ruled out, e.g., by behavioural norms or formal institutions.
A definition of maximal CP-violation
International Nuclear Information System (INIS)
Roos, M.
1985-01-01
The unitary matrix of quark flavour mixing is parametrized in a general way, permitting a mathematically natural definition of maximal CP violation. Present data turn out to violate this definition by 2-3 standard deviations. (orig.)
A cosmological problem for maximally symmetric supergravity
International Nuclear Information System (INIS)
German, G.; Ross, G.G.
1986-01-01
Under very general considerations it is shown that inflationary models of the universe based on maximally symmetric supergravity with flat potentials are unable to resolve the cosmological energy density (Polonyi) problem. (orig.)
Insulin resistance and maximal oxygen uptake
DEFF Research Database (Denmark)
Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans
2003-01-01
BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model which also included the presence of diabetes and ischemic heart disease. CONCLUSION...
Maximal supergravities and the E10 model
International Nuclear Information System (INIS)
Kleinschmidt, Axel; Nicolai, Hermann
2006-01-01
The maximal-rank hyperbolic Kac-Moody algebra e_10 has been conjectured to play a prominent role in the unification of duality symmetries in string and M theory. We review some recent developments supporting this conjecture
Gaussian maximally multipartite-entangled states
Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio
2009-12-01
We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n = 2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n <= 7.
Neutrino mass textures with maximal CP violation
International Nuclear Information System (INIS)
Aizawa, Ichiro; Kitabayashi, Teruyuki; Yasue, Masaki
2005-01-01
We show three types of neutrino mass textures which give maximal CP violation as well as maximal atmospheric neutrino mixing. These textures are described by six real mass parameters: one type is specified by two complex flavor neutrino masses and two constrained ones, and the others are specified by three complex flavor neutrino masses. In each texture, we calculate mixing angles and masses consistent with the observed data, as well as the Majorana CP phases.
Why firms should not always maximize profits
Kolstad, Ivar
2006-01-01
Though corporate social responsibility (CSR) is on the agenda of most major corporations, corporate executives still largely support the view that corporations should maximize the returns to their owners. There are two lines of defence for this position. One is the Friedmanian view that maximizing owner returns is the corporate social responsibility of corporations. The other is a position voiced by many executives, that CSR and profits go together. This paper argues that the first position i...
Maximally Informative Observables and Categorical Perception
Tsiang, Elaine
2012-01-01
We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless of whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech per...
International Nuclear Information System (INIS)
Jarlskog, C.
1985-06-01
The structure of the quark mass matrices in the Standard Electroweak Model is investigated. The commutator of the quark mass matrices is found to provide a convention-independent measure of CP-violation. The question of maximal CP-violation is discussed. The present experimental data indicate that CP is nowhere maximally violated. (author)
Shareholder, stakeholder-owner or broad stakeholder maximization
Mygind, Niels
2004-01-01
With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owner. Maximization of shareholder value is a special case of owner-maximization, and only under quite restrictive assumptions is shareholder maximization larger than or equal to stakeholder-owner...
On the maximal noise for stochastic and QCD travelling waves
International Nuclear Information System (INIS)
Peschanski, Robi
2008-01-01
Using the relation of a set of nonlinear Langevin equations to reaction-diffusion processes, we note the existence of a maximal strength of the noise for the stochastic travelling wave solutions of these equations. Its determination is obtained using the field-theoretical analysis of branching-annihilation random walks near the directed percolation transition. We study its consequence for the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation. For the related Langevin equation modeling the quantum chromodynamic nonlinear evolution of gluon density with rapidity, the physical maximal-noise limit may appear before the directed percolation transition, due to a shift in the travelling-wave speed. In this regime, an exact solution is known from a coalescence process. Universality and other open problems and applications are discussed in the outlook
Weak incidence algebra and maximal ring of quotients
Directory of Open Access Journals (Sweden)
Surjeet Singh
2004-01-01
Let X, X′ be two locally finite, preordered sets and let R be any indecomposable commutative ring. The incidence algebra I(X,R), in a sense, represents X, because of the well-known result that if the rings I(X,R) and I(X′,R) are isomorphic, then X and X′ are isomorphic. In this paper, we consider a preordered set X that need not be locally finite but has the property that each of its equivalence classes of equivalent elements is finite. Define I*(X,R) to be the set of all those functions f:X×X→R such that f(x,y)=0 whenever x⩽̸y, and the set Sf of ordered pairs (x,y) with x
Vacua of maximal gauged D=3 supergravities
International Nuclear Information System (INIS)
Fischbacher, T; Nicolai, H; Samtleben, H
2002-01-01
We analyse the scalar potentials of maximal gauged three-dimensional supergravities which reveal a surprisingly rich structure. In contrast to maximal supergravities in dimensions D≥4, all these theories possess a maximally supersymmetric (N=16) ground state with negative cosmological constant Λ < 0, except for the SO(4,4)^2 gauged theory, whose maximally supersymmetric ground state has Λ = 0. We compute the mass spectra of bosonic and fermionic fluctuations around these vacua and identify the unitary irreducible representations of the relevant background (super)isometry groups to which they belong. In addition, we find several stationary points which are not maximally supersymmetric, and determine their complete mass spectra as well. In particular, we show that there are analogues of all stationary points found in higher dimensions, among them are de Sitter (dS) vacua in the theories with noncompact gauge groups SO(5,3)^2 and SO(4,4)^2, as well as anti-de Sitter (AdS) vacua in the compact gauged theory preserving 1/4 and 1/8 of the supersymmetries. All the dS vacua have tachyonic instabilities, whereas there do exist nonsupersymmetric AdS vacua which are stable, again in contrast to the D≥4 theories
An information maximization model of eye movements
Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra
2005-01-01
We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
Maximizing band gaps in plate structures
DEFF Research Database (Denmark)
Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard
2006-01-01
Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated theoretically and experimentally and the issue of finite size effects is addressed.
Singularity Structure of Maximally Supersymmetric Scattering Amplitudes
DEFF Research Database (Denmark)
Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy
2014-01-01
We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity—properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).
Learning curves for mutual information maximization
International Nuclear Information System (INIS)
Urbanczik, R.
2003-01-01
An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed [S. Becker and G. Hinton, Nature (London) 355, 161 (1992)]. For a generic data model, I show that in the large sample limit the structure in the data is recognized by mutual information maximization. For a more restricted model, where the networks are similar to perceptrons, I calculate the learning curves for zero-temperature Gibbs learning. These show that convergence can be rather slow, and a way of regularizing the procedure is considered
Finding Maximal Pairs with Bounded Gap
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.
1999-01-01
In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper and lower bounded interval in time O(n log n+z) where z is the number of reported pairs. If the upper bound is removed the time reduces to O(n+z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well known methods for finding tandem repeats.
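The definitions in the abstract (a maximal pair, its gap, and tandem repeats as gap-zero pairs) can be illustrated with a direct quadratic-time sketch. This is not the paper's O(n log n + z) method, just a naive encoding of the definitions for small strings:

```python
def maximal_pairs(s, min_gap=0, max_gap=None):
    """Naive enumeration of maximal pairs in s.

    A pair is two occurrences (i, j) of the same substring that can be
    extended neither left nor right without breaking the match; the gap
    is the number of characters between the two occurrences. With
    min_gap=0, overlapping occurrences (negative gap) are filtered out.
    """
    n = len(s)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            # left-maximality: the characters just before the two
            # occurrences must differ (or the first occurrence starts at 0)
            if i > 0 and s[i - 1] == s[j - 1]:
                continue
            # extend the match right as far as possible (right-maximality)
            k = 0
            while j + k < n and s[i + k] == s[j + k]:
                k += 1
            if k == 0:
                continue
            gap = j - (i + k)
            if gap < min_gap:
                continue
            if max_gap is not None and gap > max_gap:
                continue
            out.append((i, j, k))  # (first start, second start, length)
    return out
```

A tandem repeat shows up as a pair with gap zero, e.g. `maximal_pairs("abab")` reports the adjacent occurrences of `"ab"`.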
Improved Algorithms OF CELF and CELF++ for Influence Maximization
Directory of Open Access Journals (Sweden)
Jiaguo Lv
2014-06-01
Motivated by wide applications in fields such as viral marketing and sales promotion, influence maximization has become one of the most extensively studied problems in social networks. However, the classical KK-Greedy algorithm for influence maximization is inefficient. Two major sources of the algorithm's inefficiency are analyzed in this paper. Following the analysis of the CELF and CELF++ algorithms, we observe that once a new seed u is selected, the nodes in the influenced set of u can never bring any marginal gain. Through this optimization strategy, many redundant nodes are removed from the candidate set. Based on this strategy, two improved algorithms, Lv_CELF and Lv_CELF++, are proposed in this study. To evaluate them, the two algorithms and their benchmark algorithms CELF and CELF++ were run on several real-world datasets. Influence degree and running time were employed to measure performance and efficiency, respectively. Experimental results show that, compared with the benchmark algorithms CELF and CELF++, the new algorithms Lv_CELF and Lv_CELF++ achieve matching influence spread with higher efficiency. Solutions based on the proposed optimization strategy can be useful for decision-making problems in scenarios related to the influence maximization problem.
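The lazy-forward idea underlying CELF (on which the Lv_ variants build) can be sketched generically: because influence spread is monotone and submodular, stale marginal gains are upper bounds, so only the top heap entry ever needs recomputing. The `spread` callback below is an illustrative stand-in for the Monte-Carlo influence estimates used in practice, not the paper's implementation:

```python
import heapq

def lazy_greedy(nodes, spread, k):
    """CELF-style lazy-forward greedy seed selection.

    spread(S) -> float estimates the influence of seed set S;
    submodularity lets us reuse stale marginal gains.
    """
    seeds, best = [], 0.0
    # initial marginal gains, negated because heapq is a min-heap
    heap = [(-spread([v]), v) for v in nodes]
    heapq.heapify(heap)
    while len(seeds) < k and heap:
        _, v = heapq.heappop(heap)
        fresh = spread(seeds + [v]) - best    # recompute only this gain
        if not heap or fresh >= -heap[0][0]:  # still best -> select it
            seeds.append(v)
            best += fresh
        else:
            heapq.heappush(heap, (-fresh, v))  # reinsert with fresh gain
    return seeds
```

For example, with a toy coverage-style spread function, the selector picks the node covering the most items first and then the node adding the most new coverage.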
Maximizing opto‐mechanical interaction using topology optimization
DEFF Research Database (Denmark)
Gersborg, Allan Roulund; Sigmund, Ole
2011-01-01
This paper studies topology optimization of a coupled opto‐mechanical problem with the goal of finding the material layout which maximizes the optical modulation, i.e. the difference between the optical response for the mechanically deformed and undeformed configuration. The optimization is performed on a periodic cell and the periodic modeling of the optical and mechanical fields has been carried out using transverse electric Bloch waves and homogenization theory in a plane stress setting, respectively. Two coupling effects are included, namely the photoelastic effect and the geometric effect...
Maximizing Team Performance: The Critical Role of the Nurse Leader.
Manges, Kirstin; Scott-Cawiezell, Jill; Ward, Marcia M
2017-01-01
Facilitating team development is challenging, yet critical for ongoing improvement across healthcare settings. The purpose of this exemplary case study is to examine the role of nurse leaders in facilitating the development of a high-performing Change Team in implementing a patient safety initiative (TeamSTEPPs) using the Tuckman Model of Group Development as a guiding framework. The case study is the synthesis of 2.5 years of critical access hospital key informant interviews (n = 50). Critical juncture points related to team development and key nurse leader actions are analyzed, suggesting that nurse leaders are essential to maximize clinical teams' performance. © 2016 Wiley Periodicals, Inc.
Maximal heat loading of electrostatic deflector's septum at the cyclotron
International Nuclear Information System (INIS)
Arzumanov, A.; Borissenko, A.
2002-01-01
An electrostatic deflector is used for extraction of accelerated particles at the isochronous cyclotron U-150 (Institute of Nuclear Physics, Kazakhstan). The efficiency of beam extraction depends on a set of factors; the decisive one is the thermal state of the septum, and beam extraction is essentially limited by the beam power dissipated on the deflector. Because of the work carried out for radioisotope production, determining the septum's maximal heat loading and optimizing the septum's geometry are of interest. The maximum heat loading of the deflector's septum and its dependence on the septum's geometry and on the thermal-physical properties of the septum's material are presented in the paper as results of numerical calculation. The obtained results are discussed.
Maximizing the Range of a Projectile.
Brown, Ronald A.
1992-01-01
Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
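The standard result the cited solutions derive, for level ground and no air resistance, is R(θ) = v₀² sin(2θ)/g, which is maximal at θ = 45°. A quick numerical check of that calculus result:

```python
import math

def projectile_range(v0, theta_deg, g=9.81):
    """Range of a projectile launched from level ground, no drag:
    R = v0^2 * sin(2*theta) / g."""
    theta = math.radians(theta_deg)
    return v0**2 * math.sin(2 * theta) / g

# a grid search over launch angles confirms the optimum at 45 degrees
best_angle = max(range(0, 91), key=lambda a: projectile_range(20.0, a))
```

The trigonometric argument in the fourth solution mentioned above amounts to the same observation: sin(2θ) peaks when 2θ = 90°.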
Robust Utility Maximization Under Convex Portfolio Constraints
International Nuclear Information System (INIS)
Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed
2015-01-01
We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Ehrenfest's Lottery--Time and Entropy Maximization
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
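The lottery described above can be simulated directly; a minimal sketch, assuming the standard two-urn Ehrenfest setup (the marble count and step count below are arbitrary choices, not taken from the paper):

```python
import random

def ehrenfest(n_marbles=1000, steps=20000, seed=1):
    """Ehrenfest two-urn model: at each step a marble chosen uniformly
    at random jumps to the other urn. Starting far from equilibrium
    (all marbles in urn A), the occupancy relaxes toward the
    entropy-maximizing split of n_marbles / 2."""
    rng = random.Random(seed)
    in_a = n_marbles  # all marbles start in urn A
    for _ in range(steps):
        if rng.randrange(n_marbles) < in_a:
            in_a -= 1  # the chosen marble was in A, so it moves to B
        else:
            in_a += 1  # it was in B, so it moves to A
    return in_a
```

After relaxation the count fluctuates around n/2 with a spread of order √n, which is the diffusion-like equilibrium the thought experiment illustrates.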
Reserve design to maximize species persistence
Robert G. Haight; Laurel E. Travis
2008-01-01
We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...
Maximization of eigenvalues using topology optimization
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2000-01-01
…to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency; one example...
Maximizing Resource Utilization in Video Streaming Systems
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
A THEORY OF MAXIMIZING SENSORY INFORMATION
Hateren, J.H. van
1992-01-01
A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power
Maximizing scientific knowledge from randomized clinical trials
DEFF Research Database (Denmark)
Gustafsson, Finn; Atar, Dan; Pitt, Bertram
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly vari...
A Model of College Tuition Maximization
Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.
2009-01-01
This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit-maximizing, price-discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…
Logit Analysis for Profit Maximizing Loan Classification
Watt, David L.; Mortensen, Timothy L.; Leistritz, F. Larry
1988-01-01
Lending criteria and loan classification methods are developed. Rating system breaking points are analyzed to present a method to maximize loan revenues. Financial characteristics of farmers are used as determinants of delinquency in a multivariate logistic model. Results indicate that the debt-to-asset and operating ratios are most indicative of default.
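The kind of logit classification the abstract describes can be sketched as follows; the coefficients and the 0.5 cutoff below are hypothetical placeholders for illustration, not the paper's estimated model:

```python
import math

def p_delinquent(debt_to_asset, operating_ratio,
                 b0=-4.0, b1=3.5, b2=2.0):
    """Logistic (logit) probability of loan delinquency from two
    financial ratios. The coefficients are illustrative placeholders."""
    z = b0 + b1 * debt_to_asset + b2 * operating_ratio
    return 1.0 / (1.0 + math.exp(-z))

def classify(loans, cutoff=0.5):
    """Accept a loan (True) when predicted delinquency risk is below
    the cutoff; the cutoff is where a revenue-maximizing breaking
    point would be chosen."""
    return [p_delinquent(d, o) < cutoff for d, o in loans]
```

In a revenue-maximizing setting the cutoff would be tuned so that the expected interest income on accepted loans net of default losses is largest, rather than fixed at 0.5.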
Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.
Cormie, Prue; McGuigan, Michael R; Newton, Robert U
2011-01-01
This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.
Mixed maximal and explosive strength training in recreational endurance runners.
Taipale, Ritva S; Mikkola, Jussi; Salo, Tiina; Hokka, Laura; Vesterinen, Ville; Kraemer, William J; Nummela, Ari; Häkkinen, Keijo
2014-03-01
Supervised periodized mixed maximal and explosive strength training added to endurance training in recreational endurance runners was examined during an 8-week intervention preceded by an 8-week preparatory strength training period. Thirty-four subjects (21-45 years) were divided into experimental groups: men (M, n = 9), women (W, n = 9), and control groups: men (MC, n = 7), women (WC, n = 9). The experimental groups performed mixed maximal and explosive exercises, whereas control subjects performed circuit training with body weight. Endurance training included running at an intensity below lactate threshold. Strength, power, endurance performance characteristics, and hormones were monitored throughout the study. Significance was set at p ≤ 0.05. Increases were observed in both experimental groups that were more systematic than in the control groups in explosive strength (12 and 13% in men and women, respectively), muscle activation, maximal strength (6 and 13%), and peak running speed (14.9 ± 1.2 to 15.6 ± 1.2 and 12.9 ± 0.9 to 13.5 ± 0.8 km/h). The control groups showed significant improvements in maximal and explosive strength, but peak running speed increased only in MC. Submaximal running characteristics (blood lactate and heart rate) improved in all groups. Serum hormones fluctuated significantly in men (testosterone) and in women (thyroid stimulating hormone) but returned to baseline by the end of the study. Mixed strength training combined with endurance training may be more effective than circuit training in recreational endurance runners to benefit overall fitness, which may be important for other adaptive processes and larger training loads associated with, e.g., marathon training.
Understanding Violations of Gricean Maxims in Preschoolers and Adults
Directory of Open Access Journals (Sweden)
Mako eOkanda
2015-07-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (the first maxim of quantity and the maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
Outer-2-independent domination in graphs
Indian Academy of Sciences (India)
An outer-2-independent dominating set of a graph G is a set D of vertices of G such that every vertex of V(G)\D has a neighbor in D and the maximum vertex degree of the subgraph induced by V(G)\D is at most one. The outer-2-independent domination ...
A quasi-PTAS for profit-maximizing pricing on line graphs
Elbassioni, K.M.; Sitters, R.A.; Zhang, Y.; Arge, L.; Hoffmann, M.; Welzl, E.
2007-01-01
We consider the problem of pricing items so as to maximize the profit made from selling these items. An instance is given by a set E of n items and a set of m clients, where each client is specified by one subset of E (the bundle of items he/she wants to buy), and a budget (valuation), which is the
Dopaminergic balance between reward maximization and policy complexity
Directory of Open Access Journals (Sweden)
Naama eParush
2011-05-01
Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (the actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine both as a reinforcement-learning signal and as a pseudo-temperature signal controlling the general level of basal ganglia excitability and the motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamo-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, actions are selected according to their estimated values and the pseudo-temperature, but in addition their probabilities vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in modulating the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
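An experience-modulated softmax of the kind described above can be sketched as follows; the additive `kappa * count` term and its form are illustrative assumptions standing in for the paper's experience modulation, not its exact parameterization:

```python
import math

def softmax_policy(values, counts, temperature=1.0, kappa=0.1):
    """Softmax action selection with a dopamine-like pseudo-temperature.

    Action values drive exploitation; the kappa term (an illustrative
    stand-in for experience modulation) boosts frequently chosen
    actions; a high temperature flattens the distribution.
    """
    scores = [(v + kappa * c) / temperature
              for v, c in zip(values, counts)]
    m = max(scores)                      # stabilize the exponentials
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

Raising the temperature (more "motor vigilance" suppression) pushes the policy toward uniform random choice, while the frequency term biases it toward habitual actions even at equal values.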
Automatic physical inference with information maximizing neural networks
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
Refined reservoir description to maximize oil recovery
International Nuclear Information System (INIS)
Flewitt, W.E.
1975-01-01
To assure maximized oil recovery from older pools, reservoir description has been advanced by fully integrating original open-hole logs and the recently introduced interpretive techniques made available through cased-hole wireline saturation logs. A refined reservoir description utilizing normalized original wireline porosity logs has been completed in the Judy Creek Beaverhill Lake ''A'' Pool, a reefal carbonate pool with current potential productivity of 100,000 BOPD and 188 active wells. Continuous porosity was documented within a reef rim and cap while discontinuous porous lenses characterized an interior lagoon. With the use of pulsed neutron logs and production data a separate water front and pressure response was recognized within discrete environmental units. The refined reservoir description aided in reservoir simulation model studies and quantifying pool performance. A pattern water flood has now replaced the original peripheral bottom water drive to maximize oil recovery
Maximal frustration as an immunological principle.
de Abreu, F Vistulo; Mostardinha, P
2009-03-06
A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.
Maximizing Macromolecule Crystal Size for Neutron Diffraction Experiments
Judge, R. A.; Kephart, R.; Leardi, R.; Myles, D. A.; Snell, E. H.; vanderWoerd, M.; Curreri, Peter A. (Technical Monitor)
2002-01-01
A challenge in neutron diffraction experiments is growing large (greater than 1 cu mm) macromolecule crystals. In taking up this challenge we have used statistical experiment design techniques to quickly identify crystallization conditions under which the largest crystals grow. These techniques provide the maximum information for minimal experimental effort, allowing optimal screening of crystallization variables in a simple experimental matrix, using the minimum amount of sample. Analysis of the results quickly tells the investigator what conditions are the most important for the crystallization. These can then be used to maximize the crystallization results in terms of reducing crystal numbers and providing large crystals of suitable habit. We have used these techniques to grow large crystals of Glucose isomerase. Glucose isomerase is an industrial enzyme used extensively in the food industry for the conversion of glucose to fructose. The aim of this study is the elucidation of the enzymatic mechanism at the molecular level. The accurate determination of hydrogen positions, which is critical for this, is a requirement that neutron diffraction is uniquely suited for. Preliminary neutron diffraction experiments with these crystals conducted at the Institute Laue-Langevin (Grenoble, France) reveal diffraction to beyond 2.5 angstrom. Macromolecular crystal growth is a process involving many parameters, and statistical experimental design is naturally suited to this field. These techniques are sample independent and provide an experimental strategy to maximize crystal volume and habit for neutron diffraction studies.
Derivative pricing based on local utility maximization
Jan Kallsen
2002-01-01
This paper discusses a new approach to contingent claim valuation in general incomplete market models. We determine the neutral derivative price which occurs if investors maximize their local utility and if derivative demand and supply are balanced. We also introduce the sensitivity process of a contingent claim. This process quantifies the reliability of the neutral derivative price and it can be used to construct price bounds. Moreover, it allows one to calibrate market models in order to be co...
Control of Shareholders’ Wealth Maximization in Nigeria
A. O. Oladipupo; C. O. Okafor
2014-01-01
This research focuses on who controls shareholders' wealth maximization and how this affects firm performance in publicly quoted non-financial companies in Nigeria. The shareholder fund was the dependent variable, while the explanatory variables were firm size (proxied by log of turnover), retained earnings (representing management control) and dividend payment (representing a measure of shareholders' control). The data used for this study were obtained from the Nigerian Stock Exchange [NSE] fact book an...
Dynamic Convex Duality in Constrained Utility Maximization
Li, Yusong; Zheng, Harry
2016-01-01
In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...
Independence and Product Systems
Skeide, Michael
2003-01-01
Starting from elementary considerations about independence and Markov processes in classical probability we arrive at the new concept of conditional monotone independence (or operator-valued monotone independence). With the help of product systems of Hilbert modules we show that monotone conditional independence arises naturally in dilation theory.
Single maximal versus combination punch kinematics.
Piorkowski, Barry A; Lees, Adrian; Barton, Gabor J
2011-03-01
The aim of this study was to determine the influence of punch type (Jab, Cross, Lead Hook and Reverse Hook) and punch modality (Single maximal, 'In-synch' and 'Out of synch' combination) on punch speed and delivery time. Ten competition-standard volunteers performed punches with markers placed on their anatomical landmarks for 3D motion capture with an eight-camera optoelectronic system. Speed and duration between key moments were computed. There were significant differences in contact speed between punch types (F(2.18, 84.87) = 105.76, p = 0.001), with Lead and Reverse Hooks developing greater speed than Jab and Cross. There were significant differences in contact speed between punch modalities (F(2.64, 102.87) = 23.52, p = 0.001), with the Single maximal (M ± SD: 9.26 ± 2.09 m/s) higher than 'Out of synch' (7.49 ± 2.32 m/s), 'In-synch' left (8.01 ± 2.35 m/s) or right lead (7.97 ± 2.53 m/s). Delivery times were significantly lower for Jab and Cross than Hook. Times were significantly lower 'In-synch' than in a Single maximal or 'Out of synch' combination mode. It is concluded that a defender may have more evasion-time than previously reported. This research could be of use to performers and coaches when considering training preparations.
Formation Control for the MAXIM Mission
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the ground work for linear formation control designs.
Gradient Dynamics and Entropy Production Maximization
Janečka, Adam; Pavelka, Michal
2018-01-01
We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of a dissipation potential and entropy; it automatically satisfies Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of free energy (or another thermodynamic potential) and entropy production. It also leads to the linear Onsager reciprocal relations and it has proven successful in the thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium, and we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. Besides, a commonly used but not often mentioned step in entropy production maximization is pinpointed, and the condition of incompressibility is incorporated into gradient dynamics.
Maximal sfermion flavour violation in super-GUTs
Energy Technology Data Exchange (ETDEWEB)
Ellis, John [King's College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Olive, Keith A. [CERN, Theoretical Physics Department, Geneva (Switzerland); University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States); Velasco-Sevilla, L. [University of Bergen, Department of Physics and Technology, PO Box 7803, Bergen (Norway)
2016-10-15
We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses $m_0$ specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses $m_{1/2}$, as is expected in no-scale models, the dominant effects of renormalisation between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to $m_{1/2}$ and generation-independent. In this case, the input scalar masses $m_0$ may violate flavour maximally, a scenario we call MaxSFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2011-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.80 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
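The power analysis described in goal (2) can be sketched with a much-simplified simulation: a log-linear decline in mean CPE with among-year process noise, tested by ordinary regression rather than the hierarchical models the authors used. All parameter values below are illustrative assumptions, not estimates from the Lake Erie data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def trend_power(n_years, annual_decline=0.10, year_sd=0.15,
                n_sims=300, alpha=0.05):
    """Fraction of simulated surveys in which a log-linear regression on
    annual mean log(CPE) detects a declining trend.  year_sd is among-year
    process noise that does not average out by adding more sites
    (an invented value for illustration)."""
    slope_true = np.log(1.0 - annual_decline)
    years = np.arange(n_years, dtype=float)
    hits = 0
    for _ in range(n_sims):
        log_cpe = slope_true * years + rng.normal(0.0, year_sd, n_years)
        res = stats.linregress(years, log_cpe)
        if res.slope < 0 and res.pvalue < alpha:
            hits += 1
    return hits / n_sims

print("5-year power: ", trend_power(5))
print("15-year power:", trend_power(15))
```

The qualitative pattern matches the abstract: with short time series even a 10% annual decline is easy to miss, while power approaches 1 as the series lengthens.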
Cardiorespiratory Coordination in Repeated Maximal Exercise
Directory of Open Access Journals (Sweden)
Sergi Garcia-Retortillo
2017-06-01
Increases in cardiorespiratory coordination (CRC) after training, with no differences in performance and physiological variables, have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 was performed 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC, defined by the number of PCs, in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of the eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC
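The PC-counting and entropy idea can be illustrated on synthetic data: when several variables are dominated by one shared physiological driver, the eigenvalue spectrum of their correlation matrix concentrates on the first PC and its entropy drops. This is only a sketch of the general method; the driver model, number of variables, and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def pc_entropy(data):
    """Shannon entropy of the normalized eigenvalue spectrum of the
    correlation matrix: higher entropy means variance spread over more PCs,
    i.e. weaker coordination among the variables."""
    corr = np.corrcoef(data, rowvar=False)
    eig = np.linalg.eigvalsh(corr)
    p = eig / eig.sum()
    p = p[p > 1e-12]          # drop numerically zero eigenvalues
    return -(p * np.log(p)).sum()

n = 500
driver = rng.normal(size=n)
# Six "coordinated" variables dominated by one shared driver...
coordinated = driver[:, None] + 0.3 * rng.normal(size=(n, 6))
# ...versus six nearly independent variables.
independent = rng.normal(size=(n, 6))

print("coordinated:", pc_entropy(coordinated))
print("independent:", pc_entropy(independent))
```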
Directory of Open Access Journals (Sweden)
Jan Karbowski
2015-10-01
The structure and quantitative composition of the cerebral cortex are interrelated with its computational capacity. Empirical data analyzed here indicate a certain hierarchy in local cortical composition. Specifically, neural wire, i.e., axons and dendrites, each take about 1/3 of cortical space, spines and glia/astrocytes each occupy about (1/3)², and capillaries around (1/3)⁴. Moreover, data analysis across species reveals that these fractions are roughly brain size independent, which suggests that they could be in some sense optimal and thus important for brain function. Is there any principle that sets them in this invariant way? This study first builds a model of a local circuit in which neural wire, spines, astrocytes, and capillaries are mutually coupled elements and are treated within a single mathematical framework. Next, various forms of the wire minimization rule (wire length, surface area, volume, or conduction delays) are analyzed, of which only minimization of wire volume provides realistic results that are very close to the empirical cortical fractions. As an alternative, a new principle called "spine economy maximization" is proposed and investigated, which is associated with maximization of spine proportion in the cortex per spine size and yields equally good but more robust results. Additionally, a combination of the wire cost and spine economy notions is considered as a meta-principle, and it is found that this proposition gives only marginally better results than either pure wire volume minimization or pure spine economy maximization, but only if the spine economy component dominates. However, such a combined meta-principle yields much better results than constraints related solely to minimization of wire length, wire surface area, and conduction delays. Interestingly, the type of spine size distribution also plays a role, and better agreement with the data is achieved for distributions with long tails. In sum, these results suggest
General conditions for maximal violation of non-contextuality in discrete and continuous variables
International Nuclear Information System (INIS)
Laversanne-Finot, A; Ketterer, A; Coudreau, T; Keller, A; Milman, P; Barros, M R; Walborn, S P
2017-01-01
The contextuality of quantum mechanics can be shown by the violation of inequalities based on measurements of well chosen observables. An important property of such observables is that their expectation value can be expressed in terms of probabilities for obtaining two exclusive outcomes. Examples of such inequalities have been constructed using either observables with a dichotomic spectrum or using periodic functions obtained from displacement operators in phase space. Here we identify the general conditions on the spectral decomposition of observables demonstrating state independent contextuality of quantum mechanics. Our results not only unify existing strategies for maximal violation of state independent non-contextuality inequalities but also lead to new scenarios enabling such violations. Among the consequences of our results is the impossibility of having a state independent maximal violation of non-contextuality in the Peres–Mermin scenario with discrete observables of odd dimensions. (paper)
Postactivation potentiation biases maximal isometric strength assessment.
Lima, Leonardo Coelho Rabello; Oliveira, Felipe Bruno Dias; Oliveira, Thiago Pires; Assumpção, Claudio de Oliveira; Greco, Camila Coelho; Cardozo, Adalgiso Croscato; Denadai, Benedito Sérgio
2014-01-01
Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine if PAP would influence isometric strength assessment. Healthy male volunteers (n = 23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s⁻¹ versus 727 ± 158 N·m·s⁻¹), and RMS (59.1 ± 12.2% RMSMAX versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.
Gain maximization in a probabilistic entanglement protocol
di Lorenzo, Antonio; Esteves de Queiroz, Johnny Hebert
Entanglement is a resource. We can therefore define gain as a monotonic function of entanglement, G(E). If a pair with entanglement E is produced with probability P, the net gain is N = P G(E) − (1 − P) C, where C is the cost of a failed attempt. We study a protocol where a pair of quantum systems is produced in a maximally entangled state ρm with probability Pm, while it is produced in a partially entangled state ρp with the complementary probability 1 − Pm. We mix a fraction w of the partially entangled pairs with the maximally entangled ones, i.e., we take the state to be ρ = (ρm + w Uloc ρp Uloc†)/(1 + w), where Uloc is an appropriate unitary local operation designed to maximize the entanglement of ρ. This procedure on one hand reduces the entanglement E, and hence the gain, but on the other hand it increases the probability of success to P = Pm + w(1 − Pm), so the net gain N may increase. There may hence be, a priori, an optimal value for w, the fraction of failed attempts that we mix in. We show that, in the hypothesis of a linear gain G(E) = E, even assuming a vanishing cost C → 0, the net gain N is increasing with w, so the best strategy is to always mix the partially entangled states. Work supported by CNPq, Conselho Nacional de Desenvolvimento Científico e Tecnológico, proc. 311288/2014-6, and by FAPEMIG, Fundação de Amparo à Pesquisa de Minas Gerais, proc. IC-FAPEMIG2016-0269 and PPM-00607-16.
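A toy numerical check of this argument, under several assumptions not in the abstract: a two-qubit realization, negativity as the entanglement measure E, Uloc taken to be the identity (the two states are already aligned), and invented values for Pm and the partially entangled state.

```python
import numpy as np

def negativity(rho):
    """Entanglement negativity of a two-qubit state: twice the magnitude of
    the negative part of the partial-transpose spectrum, so that a Bell
    state scores 1."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    eig = np.linalg.eigvalsh(pt)
    return 2 * -eig[eig < 0].sum()

def ket(a, b):
    """Density matrix of the pure state a|00> + b|11>."""
    v = np.array([a, 0.0, 0.0, b])
    return np.outer(v, v)

rho_m = ket(np.sqrt(0.5), np.sqrt(0.5))    # maximally entangled (Bell)
rho_p = ket(np.sqrt(0.85), np.sqrt(0.15))  # partially entangled (invented)
P_m = 0.5                                  # assumed success probability

# Net gain N = P * E with linear gain G(E) = E and cost C = 0;
# for this aligned pair U_loc is just the identity.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    rho = (rho_m + w * rho_p) / (1 + w)
    P = P_m + w * (1 - P_m)
    print(w, P * negativity(rho))
```

In this toy model the loss of entanglement from mixing is more than compensated by the higher success probability, so N grows monotonically with w, in line with the abstract's conclusion for linear gain.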
Maximizing percentage depletion in solid minerals
International Nuclear Information System (INIS)
Tripp, J.; Grove, H.D.; McGrath, M.
1982-01-01
This article develops a strategy for maximizing percentage depletion deductions when extracting uranium or other solid minerals. The goal is to avoid losing percentage depletion deductions by staying below the 50% limitation on taxable income from the property. The article is divided into two major sections. The first section is comprised of depletion calculations that illustrate the problem and corresponding solutions. The last section deals with the feasibility of applying the strategy and complying with the Internal Revenue Code and appropriate regulations. Three separate strategies or appropriate situations are developed and illustrated. 13 references, 3 figures, 7 tables
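The 50% limitation works as simple arithmetic: the deduction is the statutory percentage of gross income from the property, capped at half of pre-depletion taxable income. The sketch below assumes the historical 22% statutory rate for uranium; the dollar figures are invented and this is an illustration, not tax guidance.

```python
def percentage_depletion(gross_income, taxable_income, rate=0.22):
    """Percentage depletion deduction: rate * gross income, limited to
    50% of taxable income from the property (computed before depletion).
    rate=0.22 is the historical statutory rate for uranium (assumed)."""
    return min(rate * gross_income, 0.5 * taxable_income)

# Invented example: the cap binds, so part of the deduction is "lost",
# which is exactly the situation the strategy tries to avoid.
print(percentage_depletion(1_000_000, 300_000))    # capped at 150,000
print(percentage_depletion(1_000_000, 2_000_000))  # uncapped: 220,000
```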
What currency do bumble bees maximize?
Directory of Open Access Journals (Sweden)
Nicholas L Charlton
2010-08-01
In modelling bumble bee foraging, net rate of energetic intake has been suggested as the appropriate currency. The foraging behaviour of honey bees is better predicted by using efficiency, the ratio of energetic gain to expenditure, as the currency. We re-analyse several studies of bumble bee foraging and show that efficiency is as good a currency as net rate in terms of predicting behaviour. We suggest that future studies of the foraging of bumble bees should be designed to distinguish between net rate and efficiency maximizing behaviour in an attempt to discover which is the more appropriate currency.
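The two candidate currencies can rank the same foraging options differently, which is what makes experiments that distinguish them possible. A minimal numerical sketch with invented energy and time figures:

```python
def net_rate(gain, cost, time):
    """Net rate of energetic intake: (gain - cost) / time."""
    return (gain - cost) / time

def efficiency(gain, cost):
    """Ratio of energetic gain to expenditure: gain / cost."""
    return gain / cost

# Two hypothetical foraging options (energy in J, time in s; made-up numbers):
fast_costly = dict(gain=100.0, cost=40.0, time=50.0)   # work hard, finish fast
slow_cheap = dict(gain=100.0, cost=20.0, time=100.0)   # work gently, take longer

for name, o in (("fast_costly", fast_costly), ("slow_cheap", slow_cheap)):
    print(name, net_rate(**o), efficiency(o["gain"], o["cost"]))
```

Here a net-rate maximizer prefers the fast, costly option while an efficiency maximizer prefers the slow, cheap one, so observed choices between such options can reveal the currency in use.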
Maximizing policy learning in international committees
DEFF Research Database (Denmark)
Nedergaard, Peter
2007-01-01
..., this article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things..., it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...
Pouliot type duality via a-maximization
International Nuclear Information System (INIS)
Kawano, Teruhiko; Ookouchi, Yutaka; Tachikawa, Yuji; Yagi, Futoshi
2006-01-01
We study four-dimensional N=1 Spin(10) gauge theory with a single spinor and $N_Q$ vectors at the superconformal fixed point via electric-magnetic duality and a-maximization. When gauge invariant chiral primary operators hit the unitarity bounds, we find that the theory with no superpotential is identical to the one with some superpotential at the infrared fixed point. The auxiliary field method in the electric theory offers a satisfying description of the infrared fixed point, which is consistent with the better picture in the magnetic theory. In particular, it gives a clear description of the emergence of new massless degrees of freedom in the electric theory.
Blocking sets in Desarguesian planes
Blokhuis, A.; Miklós, D.; Sós, V.T.; Szönyi, T.
1996-01-01
We survey recent results concerning the size of blocking sets in Desarguesian projective and affine planes, and implications of these results, and of the techniques used to prove them, for related problems, such as the size of maximal partial spreads, small complete arcs, and small strong representative systems.
Cormie, Prue; McGuigan, Michael R; Newton, Robert U
2011-02-01
This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and
Parton Distributions based on a Maximally Consistent Dataset
Rojo, Juan
2016-04-01
The choice of data that enters a global QCD analysis can have a substantial impact on the resulting parton distributions and their predictions for collider observables. One of the main reasons for this has to do with the possible presence of inconsistencies, either internal within an experiment or external between different experiments. In order to assess the robustness of the global fit, different definitions of a conservative PDF set, that is, a PDF set based on a maximally consistent dataset, have been introduced. However, these approaches are typically affected by theory biases in the selection of the dataset. In this contribution, after a brief overview of recent NNPDF developments, we propose a new, fully objective definition of a conservative PDF set, based on the Bayesian reweighting approach. Using the new NNPDF3.0 framework, we produce various conservative sets, which turn out to be mutually in agreement within the respective PDF uncertainties, as well as with the global fit. We explore some of their implications for LHC phenomenology, finding also good consistency with the global fit result. These results provide a non-trivial validation test of the new NNPDF3.0 fitting methodology, and indicate that possible inconsistencies in the fitted dataset do not substantially affect the global fit PDFs.
Comparison of empirical strategies to maximize GENEHUNTER lod scores.
Chen, C H; Finch, S J; Mendell, N R; Gordon, D
1999-01-01
We compare four strategies for finding the settings of genetic parameters that maximize the lod scores reported in GENEHUNTER 1.2. The four strategies are iterated complete factorial designs, iterated orthogonal Latin hypercubes, evolutionary operation, and numerical optimization. The genetic parameters that are set are the phenocopy rate, penetrance, and disease allele frequency; both recessive and dominant models are considered. We selected the optimization of a recessive model on the Collaborative Study on the Genetics of Alcoholism (COGA) data of chromosome 1 for complete analysis. Convergence to a setting producing a local maximum required the evaluation of over 100 settings (for a time budget of 800 minutes on a Pentium II 300 MHz PC). Two notable local maxima were detected, suggesting the need for a more extensive search before claiming that a global maximum had been found. The orthogonal Latin hypercube design was the best strategy for finding areas that produced high lod scores with small numbers of evaluations. Numerical optimization starting from a region producing high lod scores was the strategy that found the highest maximum observed.
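A plain (unconstrained) Latin hypercube sample, the simpler cousin of the orthogonal design used in the study, can be sketched as follows. GENEHUNTER is not called here; a smooth stand-in objective with an invented peak plays the role of the lod-score evaluation, and the parameter bounds are likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def latin_hypercube(n, bounds):
    """Plain Latin hypercube sample: one point per stratum in each
    dimension, with strata paired up by independent random permutations.
    (An orthogonal Latin hypercube adds further balance constraints.)"""
    d = len(bounds)
    u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
         + rng.uniform(size=(n, d))) / n          # stratified points in [0,1)
    lo, hi = np.array(bounds, dtype=float).T
    return lo + u * (hi - lo)

def fake_lod(x):
    """Stand-in for a lod-score evaluation; the peak location and width
    scales are invented.  A real search would call GENEHUNTER here."""
    target = np.array([0.05, 0.8, 0.01])          # phenocopy, penetrance, freq
    return 3.0 - np.sum(((x - target) / [0.1, 0.5, 0.05]) ** 2)

bounds = [(0.0, 0.2), (0.1, 1.0), (0.001, 0.1)]   # phenocopy, penetrance, freq
pts = latin_hypercube(40, bounds)
scores = [fake_lod(p) for p in pts]
best = pts[int(np.argmax(scores))]
print(best, max(scores))
```

The stratification guarantees coverage of each parameter's full range with few evaluations, which is why the Latin hypercube design was effective at locating high-scoring regions cheaply.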
Quantum coherence generating power, maximally abelian subalgebras, and Grassmannian geometry
Zanardi, Paolo; Campos Venuti, Lorenzo
2018-01-01
We establish a direct connection between the coherence generating power of a unitary map in d dimensions and the geometry of the set Md of maximally abelian subalgebras (of the full operator algebra). This set can be seen as a topologically non-trivial subset of the Grassmannian over linear operators. The natural distance over the Grassmannian induces a metric structure on Md, which quantifies the lack of commutativity between pairs of subalgebras. Given a maximally abelian subalgebra, one can define, on physical grounds, an associated measure of quantum coherence. We show that the average quantum coherence generated by a unitary map acting on a uniform ensemble of quantum states in the algebra (the so-called coherence generating power of the map) is proportional to the distance between a pair of maximally abelian subalgebras in Md connected by the unitary transformation itself. By embedding the Grassmannian into a projective space, one can pull back the standard Fubini-Study metric on Md and define in this way novel geometrical measures of quantum coherence generating power. We also briefly discuss the associated differential metric structures.
Theoretical maximal storage of hydrogen in zeolitic frameworks.
Vitillo, Jenny G; Ricchiardi, Gabriele; Spoto, Giuseppe; Zecchina, Adriano
2005-12-07
Physisorption and encapsulation of molecular hydrogen in tailored microporous materials are two of the options for hydrogen storage. Among these materials, zeolites have been widely investigated. In these materials, the attained storage capacities vary widely with structure and composition, leading to the expectation that materials with improved binding sites, together with lighter frameworks, may represent efficient storage materials. In this work, we address the problem of the determination of the maximum amount of molecular hydrogen which could, in principle, be stored in a given zeolitic framework, as limited by the size, structure and flexibility of its pore system. To this end, the progressive filling with H2 of 12 purely siliceous models of common zeolite frameworks has been simulated by means of classical molecular mechanics. By monitoring the variation of cell parameters upon progressive filling of the pores, conclusions are drawn regarding the maximum storage capacity of each framework and, more generally, on framework flexibility. The flexible non-pentasils RHO, FAU, KFI, LTA and CHA display the highest maximal capacities, ranging between 2.86-2.65 mass%, well below the targets set for automotive applications but still in an interesting range. The predicted maximal storage capacities correlate well with experimental results obtained at low temperature. The technique is easily extendable to any other microporous structure, and it can provide a method for the screening of hypothetical new materials for hydrogen storage applications.
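The storage capacities quoted above are gravimetric, and the arithmetic is easy to sketch for a purely siliceous framework: mass% = m(H2) / (m(H2) + m(framework)) × 100, counted per SiO2 formula unit. The loadings below are illustrative back-of-envelope values, not results from the simulations in the paper.

```python
M_H2 = 2.016    # g/mol, molecular hydrogen
M_SIO2 = 60.08  # g/mol, one SiO2 unit of a purely siliceous framework

def mass_percent(h2_per_sio2):
    """Gravimetric storage capacity for a given H2 loading per SiO2 unit."""
    m_h2 = h2_per_sio2 * M_H2
    return 100.0 * m_h2 / (m_h2 + M_SIO2)

# Roughly 0.88 H2 per SiO2 corresponds to the ~2.86 mass% upper end
# reported for the flexible non-pentasil frameworks.
for loading in (0.5, 0.88, 1.5):
    print(loading, round(mass_percent(loading), 2))
```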
Independent EEG sources are dipolar.
Directory of Open Access Journals (Sweden)
Arnaud Delorme
Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition 'dipolarity', defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with the PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison).
Maximization techniques for oilfield development profits
International Nuclear Information System (INIS)
Lerche, I.
1999-01-01
In 1981 Nind provided a quantitative procedure for estimating the optimum number of development wells to emplace on an oilfield to maximize profit. Nind's treatment assumed that there was a steady selling price, that all wells were placed in production simultaneously, and that each well's production profile was identical and a simple exponential decline with time. This paper lifts these restrictions to allow for price fluctuations, variable with time emplacement of wells, and production rates that are more in line with actual production records than is a simple exponential decline curve. As a consequence, it is possible to design production rate strategies, correlated with price fluctuations, so as to maximize the present-day worth of a field. For price fluctuations that occur on a time-scale rapid compared to inflation rates it is appropriate to have production rates correlate directly with such price fluctuations. The same strategy does not apply for price fluctuations occurring on a time-scale long compared to inflation rates where, for small amplitudes in the price fluctuations, it is best to sell as much product as early as possible to overcome inflation factors, while for large amplitude fluctuations the best strategy is to sell product as early as possible but to do so mainly on price upswings. Examples are provided to show how these generalizations of Nind's (1981) formula change the complexion of oilfield development optimization. (author)
Systemic consultation and goal setting
Carr, Alan
1993-01-01
Over two decades of empirical research conducted within a positivist framework has shown that goal setting is a particularly useful method for influencing task performance in occupational and industrial contexts. The conditions under which goal setting is maximally effective are now clearly established. These include situations where there is a high level of acceptance and commitment, where goals are specific and challenging, where the task is relatively simple rather than ...
Harman, Nate
2016-01-01
We consider the following counting problem related to the card game SET: How many $k$-element SET-free sets are there in an $n$-dimensional SET deck? Through a series of algebraic reformulations and reinterpretations, we show the answer to this question satisfies two polynomiality conditions.
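The counting problem can be checked by brute force for tiny parameters, using the standard encoding of a SET as three cards whose coordinates each sum to zero mod 3 in an n-dimensional deck (parameters below are illustrative, not from the paper):

```python
from itertools import product, combinations

def is_set(a, b, c):
    # Three cards form a SET iff every coordinate sums to 0 mod 3.
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

def count_set_free(n, k):
    """Number of k-element SET-free subsets of the n-dimensional SET deck."""
    deck = list(product(range(3), repeat=n))
    return sum(1 for s in combinations(deck, k)
               if not any(is_set(*t) for t in combinations(s, 3)))

# n = 2: 9 cards and 12 SETs (the lines of AG(2,3)),
# so C(9,3) - 12 = 72 SET-free triples.
print(count_set_free(2, 3))  # 72
```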
Directory of Open Access Journals (Sweden)
Vasile DEDU
2012-08-01
Full Text Available In this paper we present the key aspects regarding a central bank's independence. Most economists consider that the factor which positively influences the efficiency of monetary policy measures is a high degree of central bank independence. We determined that the National Bank of Romania (NBR) has a high degree of independence. NBR has both goal and instrument independence. We also consider that the increase in NBR's independence played an important role in the significant disinflation process, as headline inflation recently dropped inside the targeted band of 3% ± 1 percentage point.
Organizing Independent Student Work
Directory of Open Access Journals (Sweden)
Zhadyra T. Zhumasheva
2015-03-01
Full Text Available This article addresses issues in organizing independent student work. The author defines the term “independence”, discusses the concepts of independent learner work and independent learner work under the guidance of an instructor, proposes a classification of assignments to be done independently, and provides methodological recommendations as to the organization of independent student work. The article discusses the need for turning the student from a passive consumer of knowledge into an active creator of it, capable of formulating a problem, analyzing the ways of solving it, coming up with an optimum outcome, and proving its correctness. The preparation of highly qualified human resources is the primary condition for boosting Kazakhstan’s competitiveness. Independent student work is a means of fostering the professional competence of future specialists. The primary form of self-education is independent work.
International Nuclear Information System (INIS)
Fredriksson, Albin; Hårdemark, Björn; Forsgren, Anders
2015-01-01
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve a higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
Tetrahedral meshing via maximal Poisson-disk sampling
Guo, Jianwei
2016-02-15
In this paper, we propose a simple yet effective method to generate 3D-conforming tetrahedral meshes from closed 2-manifold surfaces. Our approach is inspired by recent work on maximal Poisson-disk sampling (MPS), which can generate well-distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform or adaptive sampling, respectively. We also propose an efficient optimization strategy to protect the domain boundaries and to remove slivers to improve the meshing quality. We present various experimental results to illustrate the efficiency and the robustness of our proposed approach. We demonstrate that the performance and quality (e.g., minimal dihedral angle) of our approach are superior to current state-of-the-art optimization-based approaches.
Sum-Rate Maximization of Coordinated Direct and Relay Systems
DEFF Research Database (Denmark)
Sun, Fan; Popovski, Petar; Thai, Chan
2012-01-01
Joint processing of multiple communication flows in wireless systems has given rise to a number of novel transmission techniques, notably two-way relaying based on wireless network coding. Recently, a related set of techniques has emerged, termed coordinated direct and relay (CDR) transmissions, where the constellation of traffic flows is more general than the two-way case. Regardless of the actual traffic flows, in a CDR scheme the relay has a central role in managing the interference and boosting the overall system performance. In this paper we investigate the novel transmission modes, based on amplify-and-forward, that arise when the relay is equipped with multiple antennas and can use beamforming. We focus on one representative traffic type, with one uplink and one downlink user, and consider relay beamforming for achievable sum-rate maximization. The beamforming criterion leads to a non...
Beyond "utilitarianism": maximizing the clinical impact of moral judgment research.
Rosas, Alejandro; Koenigs, Michael
2014-01-01
The use of hypothetical moral dilemmas--which pit utilitarian considerations of welfare maximization against emotionally aversive "personal" harms--has become a widespread approach for studying the neuropsychological correlates of moral judgment in healthy subjects, as well as in clinical populations with social, cognitive, and affective deficits. In this article, we propose that a refinement of the standard stimulus set could provide an opportunity to more precisely identify the psychological factors underlying performance on this task, and thereby enhance the utility of this paradigm for clinical research. To test this proposal, we performed a re-analysis of previously published moral judgment data from two clinical populations: neurological patients with prefrontal brain damage and psychopathic criminals. The results provide intriguing preliminary support for further development of this assessment paradigm.
Two-colorable graph states with maximal Schmidt measure
International Nuclear Information System (INIS)
Severini, Simone
2006-01-01
The Schmidt measure was introduced by Eisert and Briegel for quantifying the degree of entanglement of multipartite quantum systems [J. Eisert, H.-J. Briegel, Phys. Rev. A 64 (2001) 22306]. For two-colorable graph states, the Schmidt measure is related to the spectrum of the associated graph. We observe that almost all two-colorable graph states have maximal Schmidt measure and we construct specific examples. By making appeal to a result of Ehrenfeucht et al. [A. Ehrenfeucht, T. Harju, G. Rozenberg, Discrete Math. 278 (2004) 45], we point out that the graph operations called local complementation and switching form a transitive group acting on the set of all graph states of a given dimension
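The abstract invokes the graph operation of local complementation. A minimal sketch of that operation on an explicit edge set (standard graph-theoretic definition; the vertex labels and example graph are illustrative):

```python
from itertools import combinations

def local_complement(edges, v):
    """Local complementation at vertex v: complement the subgraph induced
    by the neighbourhood of v. Edges are sorted 2-tuples of vertices."""
    nbrs = {a for a, b in edges if b == v} | {b for a, b in edges if a == v}
    out = set(edges)
    for a, b in combinations(sorted(nbrs), 2):
        e = (a, b)
        if e in out:
            out.remove(e)   # edge among neighbours of v: delete it
        else:
            out.add(e)      # non-edge among neighbours of v: add it
    return out

# Path graph 1-2-3: local complementation at 2 adds the edge (1, 3),
# turning the path into a triangle.
path = {(1, 2), (2, 3)}
print(sorted(local_complement(path, 2)))  # [(1, 2), (1, 3), (2, 3)]
```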
Kanada, Yoshikiyo; Sakurai, Hiroaki; Sugiura, Yoshito; Arai, Tomoaki; Koyama, Soichiro; Tanabe, Shigeo
2017-11-01
[Purpose] To create a regression formula in order to estimate 1RM for knee extensors, based on the maximal isometric muscle strength measured using a hand-held dynamometer and data regarding the body composition. [Subjects and Methods] Measurement was performed in 21 healthy males in their twenties to thirties. Single regression analysis was performed, with measurement values representing 1RM and the maximal isometric muscle strength as dependent and independent variables, respectively. Furthermore, multiple regression analysis was performed, with data regarding the body composition incorporated as another independent variable, in addition to the maximal isometric muscle strength. [Results] Through single regression analysis with the maximal isometric muscle strength as an independent variable, the following regression formula was created: 1RM (kg)=0.714 + 0.783 × maximal isometric muscle strength (kgf). On multiple regression analysis, only the total muscle mass was extracted. [Conclusion] A highly accurate regression formula to estimate 1RM was created based on both the maximal isometric muscle strength and body composition. Using a hand-held dynamometer and body composition analyzer, it was possible to measure these items in a short time, and obtain clinically useful results.
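The single-regression formula reported in the abstract can be applied directly (the input strength value below is illustrative):

```python
def estimate_1rm(isometric_kgf):
    """Estimated knee-extensor 1RM (kg) from maximal isometric muscle
    strength (kgf), using the regression formula reported in the abstract:
    1RM = 0.714 + 0.783 x maximal isometric strength."""
    return 0.714 + 0.783 * isometric_kgf

# Example: a hand-held dynamometer reading of 40 kgf (illustrative value).
print(round(estimate_1rm(40.0), 2))  # 32.03
```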
Shareholder, stakeholder-owner or broad stakeholder maximization
DEFF Research Database (Denmark)
Mygind, Niels
2004-01-01
With reference to the discussion about shareholder versus stakeholder maximization, it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating... including the shareholders of a company. Although it may be the ultimate goal for Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to give a precise definition. There is no one-dimensional measure to add different stakeholder benefits... not traded on the market, and therefore there is no possibility for practical application. Broad stakeholder maximization in practical applications instead becomes satisfying certain stakeholder demands, so that the practical application will be stakeholder-owner maximization under constraints defined...
Maximizing Lumen Gain With Directional Atherectomy.
Stanley, Gregory A; Winscott, John G
2016-08-01
To describe the use of a low-pressure balloon inflation (LPBI) technique to delineate intraluminal plaque and guide directional atherectomy in order to maximize lumen gain and achieve procedure success. The technique is illustrated in a 77-year-old man with claudication who underwent superficial femoral artery revascularization using a HawkOne directional atherectomy catheter. A standard angioplasty balloon was inflated to 1 to 2 atm during live fluoroscopy to create a 3-dimensional "lumenogram" of the target lesion. Directional atherectomy was performed only where plaque impinged on the balloon at a specific fluoroscopic orientation. The results of the LPBI technique were corroborated with multimodality diagnostic imaging, including digital subtraction angiography, intravascular ultrasound, and intra-arterial pressure measurements. With the LPBI technique, directional atherectomy can routinely achieve <10% residual stenosis, as illustrated in this case, thereby broadly supporting a no-stent approach to lower extremity endovascular revascularization. © The Author(s) 2016.
Primordial two-component maximally symmetric inflation
Enqvist, K.; Nanopoulos, D. V.; Quirós, M.; Kounnas, C.
1985-12-01
We propose a two-component inflation model, based on maximally symmetric supergravity, where the scales of reheating and the inflation potential at the origin are decoupled. This is possible because of the second-order phase transition from SU(5) to SU(3)×SU(2)×U(1) that takes place when φ ≅ φc... inflation at the global minimum leads to a reheating temperature TR ≅ (10^15-10^16) GeV. This makes it possible to generate baryon asymmetry in the conventional way without any conflict with experimental data on the proton lifetime. The mass of the gravitinos is m3/2 ≅ 10^12 GeV, thus avoiding the gravitino problem. Monopoles are diluted by residual inflation in the broken phase below the cosmological bounds if φc...
Maximizing policy learning in international committees
DEFF Research Database (Denmark)
Nedergaard, Peter
2007-01-01
In the voluminous literature on the European Union's open method of coordination (OMC), no one has hitherto analysed on the basis of scholarly examination the question of what contributes to the learning processes in the OMC committees. On the basis of a questionnaire sent to all participants, this article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...
Lovelock black holes with maximally symmetric horizons
Energy Technology Data Exchange (ETDEWEB)
Maeda, Hideki; Willison, Steven; Ray, Sourya, E-mail: hideki@cecs.cl, E-mail: willison@cecs.cl, E-mail: ray@cecs.cl [Centro de Estudios CientIficos (CECs), Casilla 1469, Valdivia (Chile)
2011-08-21
We investigate some properties of n (≥ 4)-dimensional spacetimes having symmetries corresponding to the isometries of an (n - 2)-dimensional maximally symmetric space in Lovelock gravity under the null or dominant energy condition. The well-posedness of the generalized Misner-Sharp quasi-local mass proposed in the past study is shown. Using this quasi-local mass, we clarify the basic properties of the dynamical black holes defined by a future outer trapping horizon under certain assumptions on the Lovelock coupling constants. The C² vacuum solutions are classified into four types: (i) Schwarzschild-Tangherlini-type solution; (ii) Nariai-type solution; (iii) special degenerate vacuum solution; and (iv) exceptional vacuum solution. The conditions for the realization of the last two solutions are clarified. The Schwarzschild-Tangherlini-type solution is studied in detail. We prove the first law of black-hole thermodynamics and present the expressions for the heat capacity and the free energy.
MAXIMIZING THE BENEFITS OF ERP SYSTEMS
Directory of Open Access Journals (Sweden)
Paulo André da Conceição Menezes
2010-04-01
Full Text Available The ERP (Enterprise Resource Planning) systems have been consolidated in companies of different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied in different phases: the strategic priorities and strategic planning defined as ERP strategy; business process review and ERP selection in the pre-implementation phase; project management and ERP adaptation in the implementation phase; and ERP revision and integration efforts in the post-implementation phase. Through rigorous use of case study methodology, this research led to the development and testing of a framework for maximizing the benefits of ERP systems, and seeks to contribute to the generation of ERP initiatives that optimize their performance.
Maximizing profitability in a hospital outpatient pharmacy.
Jorgenson, J A; Kilarski, J W; Malatestinic, W N; Rudy, T A
1989-07-01
This paper describes the strategies employed to increase the profitability of an existing ambulatory pharmacy operated by the hospital. Methods to generate new revenue, including implementation of a home parenteral therapy program, a home enteral therapy program, a durable medical equipment service, and home care disposable sales, are described. Programs to maximize existing revenue sources, such as increasing the capture rate on discharge prescriptions, increasing "walk-in" prescription traffic, and increasing HMO prescription volumes, are discussed. A method utilized to reduce drug expenditures is also presented. By minimizing expenses and increasing the revenues for the ambulatory pharmacy operation, net profit increased from $26,000 to over $140,000 in one year.
Maximizing the benefits of a dewatering system
International Nuclear Information System (INIS)
Matthews, P.; Iverson, T.S.
1999-01-01
The use of dewatering systems in the mining, industrial sludge, and sewage waste treatment industries is discussed, along with some of the problems encountered while using drilling fluid dewatering technology. The technology is an acceptable drilling waste handling alternative, but it has had problems associated with recycled fluid incompatibility, high chemical costs, and system inefficiencies. This paper discusses the following five action areas that can maximize the benefits and help reduce the costs of a dewatering project: (1) co-ordinate all services; (2) choose equipment that fits the drilling program; (3) match the chemical treatment with the drilling fluid types; (4) determine recycled fluid compatibility requirements; and (5) determine the disposal requirements before project start-up. 2 refs., 5 figs
Cole, James R; Dodge, William W; Findley, John S; Young, Stephen K; Horn, Bruce D; Kalkwarf, Kenneth L; Martin, Max M; Winder, Ronald L
2015-05-01
This Point/Counterpoint article discusses the transformation of dental practice from the traditional solo/small-group (partnership) model of the 1900s to large Dental Support Organizations (DSO) that support affiliated dental practices by providing nonclinical functions such as, but not limited to, accounting, human resources, marketing, and legal and practice management. Many feel that DSO-managed group practices (DMGPs) with employed providers will become the setting in which the majority of oral health care will be delivered in the future. Viewpoint 1 asserts that the traditional dental practice patterns of the past are shifting as many younger dentists gravitate toward employed positions in large group practices or the public sector. Although educational debt is relevant in predicting graduates' practice choices, other variables such as gender, race, and work-life balance play critical roles as well. Societal characteristics demonstrated by aging Gen Xers and those in the Millennial generation blend seamlessly with the opportunities DMGPs offer their employees. Viewpoint 2 contends the traditional model of dental care delivery-allowing entrepreneurial practitioners to make decisions in an autonomous setting-is changing but not to the degree nor as rapidly as Viewpoint 1 professes. Millennials entering the dental profession, with characteristics universally attributed to their generation, see value in the independence and flexibility that a traditional practice allows. Although DMGPs provide dentists one option for practice, several alternative delivery models offer current dentists and future dental school graduates many of the advantages of DMGPs while allowing them to maintain the independence and freedom a traditional practice provides.
A Criterion to Identify Maximally Entangled Four-Qubit State
International Nuclear Information System (INIS)
Zha Xinwei; Song Haiyang; Feng Feng
2011-01-01
Paolo Facchi, et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new maximally entangled four-qubit states as well. (general)
With age a lower individual breathing reserve is associated with a higher maximal heart rate.
Burtscher, Martin; Gatterer, Hannes; Faulhaber, Martin; Burtscher, Johannes
2018-01-01
Maximal heart rate (HRmax) declines linearly with increasing age. Regular exercise training is supposed to partly prevent this decline, whereas sex and habitual physical activity do not. High exercise capacity is associated with a high cardiac output (HR × stroke volume) and high ventilatory requirements. Due to the close cardiorespiratory coupling, we hypothesized that the individual ventilatory response to maximal exercise might be associated with the age-related HRmax. Retrospective analyses were conducted on the results of 129 consecutively performed routine cardiopulmonary exercise tests. The study sample comprised healthy subjects of both sexes across a broad range of ages (20-86 years). Maximal values of power output, minute ventilation, oxygen uptake and heart rate were assessed by incremental cycle spiroergometry. Linear multivariate regression analysis revealed that, in addition to age, the individual breathing reserve at maximal exercise was independently predictive of HRmax. A lower breathing reserve, due to a high ventilatory demand and/or a low ventilatory capacity and more pronounced at a higher age, was associated with a higher HRmax. Age explained 72% of the observed variance in HRmax, which improved to 83% when the variable "breathing reserve" was entered. The presented findings indicate an independent association between the breathing reserve at maximal exercise and maximal heart rate, i.e., a low individual breathing reserve is associated with a higher age-related HRmax. A deeper understanding of this association has to be investigated in a more physiological scenario. Copyright © 2017 Elsevier B.V. All rights reserved.
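The breathing reserve quantity can be sketched with its standard cardiopulmonary-exercise-testing definition, (MVV − VEmax)/MVV; note this is the common textbook formula and the paper's exact operationalization may differ (input values are illustrative):

```python
def breathing_reserve(mvv_l_min, ve_max_l_min):
    """Breathing reserve as a percentage of ventilatory capacity:
    (MVV - VEmax) / MVV * 100, where MVV is the maximal voluntary
    ventilation and VEmax the minute ventilation at maximal exercise.
    (Standard definition; possibly not the paper's exact one.)"""
    return (mvv_l_min - ve_max_l_min) / mvv_l_min * 100.0

# A subject with high ventilatory demand relative to capacity has a low reserve.
print(round(breathing_reserve(150.0, 120.0), 1))  # 20.0
```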
Innovative Conference Curriculum: Maximizing Learning and Professionalism
Hyland, Nancy; Kranzow, Jeannine
2012-01-01
This action research study evaluated the potential of an innovative curriculum to move 73 graduate students toward professional development. The curriculum was grounded in the professional conference and utilized the motivation and expertise of conference presenters. This innovation required students to be more independent, act as a critical…
Seizures and Teens: Maximizing Health and Safety
Sundstrom, Diane
2007-01-01
Parents' and caregivers' job is to help their children become happy, healthy, and productive members of society. They try to balance the desire to protect their children with their need to become independent young adults. This can be a struggle for parents of teens with seizures, since there are so many challenges they may face. Teenagers…
Perfect independent sets with respect to infinitely many relations
Czech Academy of Sciences Publication Activity Database
Doležal, Martin; Kubiś, Wieslaw
2016-01-01
Roč. 55, č. 7 (2016), s. 847-856 ISSN 0933-5846 R&D Projects: GA ČR(CZ) GA14-07880S Institutional support: RVO:67985840 Keywords : perfect clique * free subgroup * open relation Subject RIV: BA - General Mathematics Impact factor: 0.394, year: 2016 http://link.springer.com/article/10.1007%2Fs00153-016-0498-3
Independent and Dominating Sets in Wireless Communication Graphs
Nieberg, T.
2006-01-01
Wireless ad hoc networks are advancing rapidly, both in research and more and more into our everyday lives. Wireless sensor networks are a prime example of a new technology that has gained a lot of attention in the literature, and that is going to enhance the way we view and interact with the
The exact LPT-bound for maximizing the minimum completion time
Csirik, J.; Kellerer, H.; Woeginger, G.J.
1992-01-01
We consider the problem of assigning a set of jobs to a system of m identical processors in order to maximize the earliest processor completion time. It was known that the LPT-heuristic gives an approximation of worst case ratio at most 3/4. In this note we show that the exact worst case ratio of
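The LPT heuristic named in the abstract can be sketched as follows (job sizes and machine count are illustrative):

```python
import heapq

def lpt_min_completion(jobs, m):
    """LPT heuristic: sort jobs longest-first and assign each to the
    currently least-loaded of m identical processors; return the earliest
    (minimum) processor completion time."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for job in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + job)
    return min(loads)

# Example: 7 jobs on 3 machines; LPT yields loads 10/9/9, so the
# earliest completion time is 9.
print(lpt_min_completion([7, 6, 5, 4, 3, 2, 1], 3))  # 9.0
```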
The Effects of Rear-Wheel Camber on Maximal Effort Mobility Performance in Wheelchair Athletes
Mason, B.; van der Woude, L.; Tolfrey, K.; Goosey-Tolfrey, V.
This study examined the effect of rear-wheel camber on maximal effort wheelchair mobility performance. 14 highly trained wheelchair court sport athletes performed a battery of field tests in 4 standardised camber settings (15°, 18°, 20°, 24°) with performance analysed using a velocometer. 20 m
Tissue P Systems With Channel States Working in the Flat Maximally Parallel Way.
Song, Bosheng; Perez-Jimenez, Mario J; Paun, Gheorghe; Pan, Linqiang
2016-10-01
Tissue P systems with channel states are a class of bio-inspired parallel computational models in which rules are used in a sequential manner (on each channel, at most one rule can be used at each step). In this work, tissue P systems with channel states working in a flat maximally parallel way are considered: at each step, on each channel, a maximal set of applicable rules that pass from a given state to a unique next state is chosen, and each rule in the set is applied once. The computational power of such P systems is investigated. Specifically, it is proved that tissue P systems with channel states and antiport rules of length two are able to compute Parikh sets of finite languages, and that such P systems with one cell and noncooperative symport rules can compute at least all Parikh sets of matrix languages. Some Turing universality results are also provided. Moreover, the NP-complete problem SAT is solved by tissue P systems with channel states, cell division, and noncooperative symport rules working in the flat maximally parallel way; nevertheless, if channel states are not used, then such P systems working in the flat maximally parallel way can solve only tractable problems. These results show that channel states provide a frontier of tractability between efficiency and non-efficiency in the framework of tissue P systems with cell division (assuming P ≠ NP).
Using complete measurement statistics for optimal device-independent randomness evaluation
International Nuclear Information System (INIS)
Nieto-Silleras, O; Pironio, S; Silman, J
2014-01-01
The majority of recent works investigating the link between non-locality and randomness, e.g. in the context of device-independent cryptography, do so with respect to some specific Bell inequality, usually the CHSH inequality. However, the joint probabilities characterizing the measurement outcomes of a Bell test are richer than just the degree of violation of a single Bell inequality. In this work we show how to take this extra information into account in a systematic manner in order to optimally evaluate the randomness that can be certified from non-local correlations. We further show that taking into account the complete set of outcome probabilities is equivalent to optimizing over all possible Bell inequalities, thereby allowing us to determine the optimal Bell inequality for certifying the maximal amount of randomness from a given set of non-local correlations. (paper)
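The "degree of violation of a single Bell inequality" the abstract refers to is typically the CHSH value. A minimal sketch of computing it from the four correlators of a Bell test (the correlator values below are the textbook quantum-optimal ones for a maximally entangled state, not data from the paper):

```python
import math

def chsh(E):
    """CHSH value S = E(a0,b0) + E(a0,b1) + E(a1,b0) - E(a1,b1),
    where E is a dict mapping setting pairs (x, y) to correlators."""
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

# Optimal measurement settings on a maximally entangled two-qubit state
# give correlators of magnitude 1/sqrt(2), reaching Tsirelson's bound
# 2*sqrt(2), above the local (classical) bound of 2.
q = 1 / math.sqrt(2)
E_quantum = {(0, 0): q, (0, 1): q, (1, 0): q, (1, 1): -q}
S = chsh(E_quantum)
print(S > 2)        # True: non-local correlations
print(round(S, 3))  # 2.828
```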
Automatic sets and Delone sets
International Nuclear Information System (INIS)
Barbe, A; Haeseler, F von
2004-01-01
Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.
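The Thue-Morse sequence cited in the abstract is the classic example of an automatic set: t(n) is the parity of the number of 1-bits of n, computable by a finite automaton reading the base-2 digits. A minimal sketch (the two-dimensional analogue shown is one natural construction and may differ from the paper's):

```python
def thue_morse(n):
    """Thue-Morse sequence: t(n) = parity of the number of 1-bits of n.
    The set {n : t(n) = 1} is a classic automatic subset of Z."""
    return bin(n).count("1") % 2

def thue_morse_2d(m, n):
    """One natural two-dimensional analogue: parity of the total number
    of 1-bits of the pair (m, n). (Illustrative; the paper's 2D version
    may be defined differently.)"""
    return (bin(m).count("1") + bin(n).count("1")) % 2

print([thue_morse(n) for n in range(8)])  # [0, 1, 1, 0, 1, 0, 0, 1]
```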
Nuclear Safety Authority independence, progresses to be considered
International Nuclear Information System (INIS)
Delzangles, Hubert
2013-01-01
The Nuclear Safety Authority is an independent administrative body. Nevertheless, functional and organic independence from operators and government can have different degrees. In the current context, where the government holds a large share of the main French nuclear operators, independence has to be maximal in order to avoid any conflict of interest that could threaten nuclear safety. From a global point of view, it is possible to consider the risks or benefits of institutionalized cooperation between national regulators for the necessary independence of the Nuclear Safety Authority.
Accounting for Independent Schools.
Sonenstein, Burton
The diversity of independent schools in size, function, and mode of operation has resulted in a considerable variety of accounting principles and practices. This lack of uniformity has tended to make understanding, evaluation, and comparison of independent schools' financial statements a difficult and sometimes impossible task. This manual has…
Experimental Implementation of a Kochen-Specker Set of Quantum Tests
D'Ambrosio, Vincenzo; Herbauts, Isabelle; Amselem, Elias; Nagali, Eleonora; Bourennane, Mohamed; Sciarrino, Fabio; Cabello, Adán
2013-01-01
The conflict between classical and quantum physics can be identified through a series of yes-no tests on quantum systems, without it being necessary that these systems be in special quantum states. Kochen-Specker (KS) sets of yes-no tests have this property and provide a quantum-versus-classical advantage that is free of the initialization problem that affects some quantum computers. Here, we report the first experimental implementation of a complete KS set that consists of 18 yes-no tests on four-dimensional quantum systems and show how to use the KS set to obtain a state-independent quantum advantage. We first demonstrate the unique power of this KS set for solving a task while avoiding the problem of state initialization. Such a demonstration is done by showing that, for 28 different quantum states encoded in the orbital-angular-momentum and polarization degrees of freedom of single photons, the KS set provides an impossible-to-beat solution. In a second experiment, we generate maximally contextual quantum correlations by performing compatible sequential measurements of the polarization and path of single photons. In this case, state independence is demonstrated for 15 different initial states. Maximum contextuality and state independence follow from the fact that the sequences of measurements project any initial quantum state onto one of the KS set’s eigenstates. Our results show that KS sets can be used for quantum-information processing and quantum computation and pave the way for future developments.
A Note of Caution on Maximizing Entropy
Directory of Open Access Journals (Sweden)
Richard E. Neapolitan
2014-07-01
Full Text Available The Principle of Maximum Entropy is often used to update probabilities due to evidence instead of performing Bayesian updating using Bayes’ Theorem, and its use often has efficacious results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases, and discusses how to identify some of the situations in which this principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy sometimes can stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian approach that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples.
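As a concrete illustration of the maximum-entropy machinery discussed above (this sketch is mine, not an example from the paper): over a finite support with a mean constraint, the maximum-entropy distribution has the Gibbs/exponential form p_i ∝ exp(λ x_i), and the multiplier λ can be found by bisection. All numbers below are invented for illustration:

```python
import math

def maxent_dist(xs, mean_target, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution on xs subject to E[X] = mean_target.
    The solution has Gibbs form p_i ∝ exp(lam * x_i); since E[X] is
    monotone increasing in lam, we can solve for lam by bisection."""
    def mean_for(lam):
        ws = [math.exp(lam * x) for x in xs]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, xs)) / z
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < mean_target:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

# Hypothetical example: support {1, 2, 3}, constrained mean 2.5.
# With no constraint the maxent answer is uniform; the mean constraint
# tilts the distribution exponentially toward the larger outcomes.
xs = [1, 2, 3]
p = maxent_dist(xs, 2.5)
```

A Bayesian who held a strong prior over {1, 2, 3} would in general not arrive at this tilted-exponential distribution, which is the kind of conflict the paper's examples exhibit.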
Optimal topologies for maximizing network transmission capacity
Chen, Zhenhao; Wu, Jiajing; Rong, Zhihai; Tse, Chi K.
2018-04-01
It has been widely demonstrated that the structure of a network is a major factor affecting its traffic dynamics. In this work, we try to identify the optimal topologies for maximizing the network transmission capacity, as well as to build a clear relationship between the structural features of a network and its transmission performance in terms of traffic delivery. We propose an approach for designing optimal network topologies against traffic congestion by link rewiring and apply it to Barabási-Albert scale-free, static scale-free and Internet Autonomous System-level networks. Furthermore, we analyze the optimized networks using complex network parameters that characterize the structure of networks, and our simulation results suggest that an optimal network for traffic transmission is more likely to have a core-periphery structure. However, assortative mixing and the rich-club phenomenon may have negative impacts on network performance. Based on the observations of the optimized networks, we propose an efficient method to improve the transmission capacity of large-scale networks.
New features of the maximal abelian projection
International Nuclear Information System (INIS)
Bornyakov, V.G.; Polikarpov, M.I.; Syritsyn, S.N.; Schierholz, G.; Suzuki, T.
2005-12-01
After fixing the Maximal Abelian gauge in SU(2) lattice gauge theory we decompose the nonabelian gauge field into the so-called monopole field and the modified nonabelian field with monopoles removed. We then calculate the respective static potentials and find that the potential due to the modified nonabelian field is nonconfining while, as is well known, the monopole field potential is linear. Furthermore, we show that the sum of these potentials approximates the nonabelian static potential with 5% or higher precision at all distances considered. We conclude that at large distances the monopole field potential describes the classical energy of the hadronic string while the modified nonabelian field potential describes the string fluctuations. A similar decomposition was observed to work for the adjoint static potential. A check was also made of the center projection in the direct center gauge. Two static potentials, determined by the projected Z_2 field and by the modified nonabelian field without the Z_2 component, were calculated. It was found that their sum is a substantially worse approximation of the SU(2) static potential than that found in the monopole case. It is further demonstrated that a similar decomposition can be made for the flux tube action/energy density. (orig.)
International Nuclear Information System (INIS)
Romano, Raffaele; Loock, Peter van
2010-01-01
Quantum teleportation enables deterministic and faithful transmission of quantum states, provided a maximally entangled state is preshared between sender and receiver, and a one-way classical channel is available. Here, we prove that these resources are not only sufficient, but also necessary, for deterministically and faithfully sending quantum states through any fixed noisy channel of maximal rank, when a single use of the channel is admitted. In other words, for this family of channels, there are no other protocols, based on different (and possibly cheaper) sets of resources, capable of replacing quantum teleportation.
Short Run Profit Maximization in a Convex Analysis Framework
Directory of Open Access Journals (Sweden)
Ilko Vrankic
2017-03-01
Full Text Available In this article we analyse the short run profit maximization problem in a convex analysis framework. The goal is to deductively apply the results of convex analysis, exploiting the unique structure of microeconomic phenomena, to the well-known short run profit maximization problem. In the primal optimization model the short run technology is represented by the short run production function, and the normalized profit function, which expresses profit in output units, is derived. In this approach the choice variable is the labour quantity. Alternatively, technology is represented by the real variable cost function, where costs are expressed in labour units, and the normalized profit function is derived, this time expressing profit in labour units. The choice variable in this approach is the quantity of production. The emphasis in these two perspectives of the primal approach is on the first order necessary conditions of both models, which are a consequence of enveloping the closed convex set describing technology with its tangents. The dual model starts from the normalized profit function and recovers the production function, and alternatively the real variable cost function. In the first perspective of the dual approach the choice variable is the real wage, and in the second it is the real product price expressed in labour units. It is shown that changing variables into parameters and parameters into variables leads to two optimization models which give the same system of labour demand and product supply functions and their inverses. By deductively applying the results of convex analysis, comparative statics results are derived that describe the firm's behaviour in the short run.
Environmental Influences on Independent Collaborative Play
Mawson, Brent
2010-01-01
Data from two qualitative research projects indicated a relationship between the type of early childhood setting and children's independent collaborative play. The first research project involved 22 three- and four-year-old children in a daylong setting and 47 four-year-old children in a sessional kindergarten. The second project involved…
Probabilistic conditional independence structures
Studeny, Milan
2005-01-01
Probabilistic Conditional Independence Structures provides the mathematical description of probabilistic conditional independence structures; the author uses non-graphical methods of their description, and takes an algebraic approach. The monograph presents the methods of structural imsets and supermodular functions, and deals with independence implication and equivalence of structural imsets. Motivation, mathematical foundations and areas of application are included, and a rough overview of graphical methods is also given. In particular, the author has been careful to use suitable terminology, and presents the work so that it will be understood by both statisticians and researchers in artificial intelligence. The necessary elementary mathematical notions are recalled in an appendix.
Peake, Jonathan M; Nosaka, Kazunori; Muthalib, Makii; Suzuki, Katsuhiko
2006-01-01
We compared changes in markers of muscle damage and systemic inflammation after submaximal and maximal lengthening muscle contractions of the elbow flexors. Using a cross-over design, 10 healthy young men not involved in resistance training completed a submaximal trial (10 sets of 60 lengthening contractions at 10% maximum isometric strength, 1 min rest between sets), followed by a maximal trial (10 sets of three lengthening contractions at 100% maximum isometric strength, 3 min rest between sets). Lengthening contractions were performed on an isokinetic dynamometer. Opposite arms were used for the submaximal and maximal trials, and the trials were separated by a minimum of two weeks. Blood was sampled before, immediately after, 1 h, 3 h, and 1-4 d after each trial. Total leukocyte and neutrophil numbers, and the serum concentration of soluble tumor necrosis factor-alpha receptor 1 were elevated after both trials (P < 0.01), but there were no differences between the trials. Serum IL-6 concentration was elevated 3 h after the submaximal contractions (P < 0.01). The concentrations of serum tumor necrosis factor-alpha, IL-1 receptor antagonist, IL-10, granulocyte-colony stimulating factor and plasma C-reactive protein remained unchanged following both trials. Maximum isometric strength and range of motion decreased significantly (P < 0.001) after both trials, and were lower from 1-4 days after the maximal contractions compared to the submaximal contractions. Plasma myoglobin concentration and creatine kinase activity, muscle soreness and upper arm circumference all increased after both trials (P < 0.01), but were not significantly different between the trials. Therefore, there were no differences in markers of systemic inflammation, despite evidence of greater muscle damage following maximal versus submaximal lengthening contractions of the elbow flexors.
Value maximizing maintenance policies under general repair
International Nuclear Information System (INIS)
Marais, Karen B.
2013-01-01
One class of maintenance optimization problems considers the notion of general repair maintenance policies where systems are repaired or replaced on failure. In each case the optimality is based on minimizing the total maintenance cost of the system. These cost-centric optimizations ignore the value dimension of maintenance and can lead to maintenance strategies that do not maximize system value. This paper applies these ideas to the general repair optimization problem using a semi-Markov decision process, discounted cash flow techniques, and dynamic programming to identify the value-optimal actions for any given time and system condition. The impact of several parameters on maintenance strategy, such as operating cost and revenue, system failure characteristics, repair and replacement costs, and the planning time horizon, is explored. This approach provides a quantitative basis on which to base maintenance strategy decisions that contribute to system value. These decisions are different from those suggested by traditional cost-based approaches. The results show (1) how the optimal action for a given time and condition changes as replacement and repair costs change, and identifies the point at which these costs become too high for profitable system operation; (2) that for shorter planning horizons it is better to repair, since there is no time to reap the benefits of increased operating profit and reliability; (3) how the value-optimal maintenance policy is affected by the system's failure characteristics, and hence whether it is worthwhile to invest in higher reliability; and (4) the impact of the repair level on the optimal maintenance policy. -- Highlights: •Provides a quantitative basis for maintenance strategy decisions that contribute to system value. •Shows how the optimal action for a given condition changes as replacement and repair costs change. •Shows how the optimal policy is affected by the system's failure characteristics. •Shows when it is
Independence of irrelevant alternatives and revealed group preferences
Wakker, P.P.; Peters, H.J.M.; Ichiishi, A.N.; Neyman, A.; Tauman, Y.
1990-01-01
It is shown that a Pareto optimal and continuous single-valued choice function defined on the compact convex subsets of the positive orthant of the plane maximizes a real-valued function if and only if it satisfies the independence of irrelevant alternatives condition. Further, this real-valued
Maximal Preference Utilitarianism as an Educational Aspiration
Stables, Andrew
2016-01-01
This paper attempts to square libertarian principles with the reality of formal education by asking how far we should and can allow people to do as they wish in educational settings. The major focus is on children in schools, as the concept "childhood" "ipso facto" implies restrictions on doing as one wishes, and schools as…
Matching, Demand, Maximization, and Consumer Choice
Wells, Victoria K.; Foxall, Gordon R.
2013-01-01
The use of behavioral economics and behavioral psychology in consumer choice has been limited. The current study extends the study of consumer behavior analysis, a synthesis between behavioral psychology, economics, and marketing, to a larger data set. This article presents the current work and results from the early analysis of the data. We…
Maximizing Learning Strategies to Promote Learner Autonomy
Directory of Open Access Journals (Sweden)
Junaidi Mistar
2001-01-01
Full Text Available Learning a new language is ultimately about being able to communicate with it. Encouraging a sense of responsibility on the part of the learners is crucial for training them to be proficient communicators. As such, understanding the strategies that they employ in acquiring language skills is important for developing ideas of how to promote learner autonomy. Research recently conducted with three different groups of learners of English at the tertiary education level in Malang indicated that they used metacognitive and social strategies at a high frequency, while memory, cognitive, compensation, and affective strategies were exercised at a medium frequency. This finding implies that the learners have acquired some degree of autonomy, because metacognitive strategies require them to independently make plans for their learning activities as well as evaluate their progress, and social strategies require them to independently enhance communicative interactions with other people. Further actions are then to be taken to increase their learning autonomy, that is, by intensifying the practice and use of the other four strategy categories, which are not yet applied intensively.
Maximizing protection from use of oral cholera vaccines in developing country settings
Desai, Sachin N; Cravioto, Alejandro; Sur, Dipika; Kanungo, Suman
2014-01-01
When oral vaccines are administered to children in lower- and middle-income countries, they do not induce the same immune responses as they do in developed countries. Although not completely understood, reasons for this finding include maternal antibody interference, mucosal pathology secondary to infection, malnutrition, enteropathy, and previous exposure to the organism (or related organisms). Young children experience a high burden of cholera infection, which can lead to severe acute dehydrating diarrhea and substantial mortality and morbidity. Oral cholera vaccines show variations in their duration of protection and efficacy between children and adults. Evaluating innate and memory immune response is necessary to understand V. cholerae immunity and to improve current cholera vaccine candidates, especially in young children. Further research on the benefits of supplementary interventions and delivery schedules may also improve immunization strategies. PMID:24861554
POLITENESS MAXIM OF MAIN CHARACTER IN SECRET FORGIVEN
Directory of Open Access Journals (Sweden)
Sang Ayu Isnu Maharani
2017-06-01
Full Text Available The maxim of politeness is an interesting subject to discuss, since politeness has been instilled in us since childhood. We are obliged to be polite to anyone, whether in speaking or in acting, and somehow we manage to show politeness in our spoken expression even though our intention might not be so polite. For example, we must appreciate others' opinions even when we object to them. In this article the analysis of politeness is based on the maxims proposed by Leech, who distinguished six types of politeness maxim. The discussion shows that the main characters (Kristen and Kami) use all types of maxim in their conversation. The most commonly used are the approbation maxim and the agreement maxim.
Kettlebell swing training improves maximal and explosive strength.
Lake, Jason P; Lauder, Mike A
2012-08-01
The aim of this study was to establish the effect that kettlebell swing (KB) training had on measures of maximum (half squat-HS-1 repetition maximum [1RM]) and explosive (vertical jump height-VJH) strength. To put these effects into context, they were compared with the effects of jump squat power training (JS-known to improve 1RM and VJH). Twenty-one healthy men (age = 18-27 years, body mass = 72.58 ± 12.87 kg) who could perform a proficient HS were tested for their HS 1RM and VJH pre- and post-training. Subjects were randomly assigned to either a KB or JS training group after HS 1RM testing and trained twice a week. The KB group performed 12-minute bouts of KB exercise (12 rounds of 30-second exercise, 30-second rest with 12 kg if 70 kg). The JS group performed at least 4 sets of 3 JS with the load that maximized peak power-Training volume was altered to accommodate different training loads and ranged from 4 sets of 3 with the heaviest load (60% 1RM) to 8 sets of 6 with the lightest load (0% 1RM). Maximum strength improved by 9.8% (HS 1RM: 165-181% body mass, p < 0.001) after the training intervention, and post hoc analysis revealed that there was no significant difference between the effect of KB and JS training (p = 0.56). Explosive strength improved by 19.8% (VJH: 20.6-24.3 cm) after the training intervention, and post hoc analysis revealed that the type of training did not significantly affect this either (p = 0.38). The results of this study clearly demonstrate that 6 weeks of biweekly KB training provides a stimulus that is sufficient to increase both maximum and explosive strength offering a useful alternative to strength and conditioning professionals seeking variety for their athletes.
Maximizing gain in high-throughput screening using conformal prediction.
Svensson, Fredrik; Afzal, Avid M; Norinder, Ulf; Bender, Andreas
2018-02-21
Iterative screening has emerged as a promising approach to increase the efficiency of screening campaigns compared to traditional high throughput approaches. By learning from a subset of the compound library, inferences on what compounds to screen next can be made by predictive models, resulting in more efficient screening. One way to evaluate screening is to consider the cost of screening compared to the gain associated with finding an active compound. In this work, we introduce a conformal predictor coupled with a gain-cost function with the aim to maximise gain in iterative screening. Using this setup we were able to show that by evaluating the predictions on the training data, very accurate predictions on what settings will produce the highest gain on the test data can be made. We evaluate the approach on 12 bioactivity datasets from PubChem training the models using 20% of the data. Depending on the settings of the gain-cost function, the settings generating the maximum gain were accurately identified in 8-10 out of the 12 datasets. Broadly, our approach can predict what strategy generates the highest gain based on the results of the cost-gain evaluation: to screen the compounds predicted to be active, to screen all the remaining data, or not to screen any additional compounds. When the algorithm indicates that the predicted active compounds should be screened, our approach also indicates what confidence level to apply in order to maximize gain. Hence, our approach facilitates decision-making and allocation of the resources where they deliver the most value by indicating in advance the likely outcome of a screening campaign.
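The gain-cost trade-off described above can be made concrete with a toy calculation. This is not the paper's actual gain-cost function or data; the numbers and strategy names below are hypothetical, and only illustrate how comparing "screen none / screen all / screen predicted actives" by net gain selects a strategy:

```python
def gain(n_screened, n_actives_found, hit_value, screen_cost):
    """Net gain of a screening strategy: value of the actives found
    minus the cost of every compound screened."""
    return n_actives_found * hit_value - n_screened * screen_cost

# Hypothetical campaign: 10,000 compounds remain, 50 of them active,
# each hit worth 100 units, each assay costing 1 unit. A conformal
# predictor flags 800 compounds, 40 of which are truly active.
strategies = {
    "screen_none": gain(0, 0, hit_value=100.0, screen_cost=1.0),
    "screen_all": gain(10_000, 50, hit_value=100.0, screen_cost=1.0),
    "screen_predicted": gain(800, 40, hit_value=100.0, screen_cost=1.0),
}
best = max(strategies, key=strategies.get)  # -> "screen_predicted"
```

With a lower hit value or higher assay cost the same comparison can flip to "screen_none", which mirrors the paper's point that the cost-gain evaluation indicates in advance which of the three outcomes to expect.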
Bois, John P; Geske, Jeffrey B; Foley, Thomas A; Ommen, Steve R; Pellikka, Patricia A
2017-02-15
Left ventricular (LV) wall thickness is a prognostic marker in hypertrophic cardiomyopathy (HC). LV wall thickness ≥30 mm (massive hypertrophy) is independently associated with sudden cardiac death. Presence of massive hypertrophy is used to guide decision making for cardiac defibrillator implantation. We sought to determine whether measurements of maximal LV wall thickness differ between cardiac magnetic resonance imaging (MRI) and transthoracic echocardiography (TTE). Consecutive patients were studied who had HC without previous septal ablation or myectomy and underwent both cardiac MRI and TTE at a single tertiary referral center. Reported maximal LV wall thickness was compared between the imaging techniques. Patients with ≥1 technique reporting massive hypertrophy received subset analysis. In total, 618 patients were evaluated from January 1, 2003, to December 21, 2012 (mean [SD] age, 53 [15] years; 381 men [62%]). In 75 patients (12%), reported maximal LV wall thickness was identical between MRI and TTE. Median difference in reported maximal LV wall thickness between the techniques was 3 mm (maximum difference, 17 mm). Of the 63 patients with ≥1 technique measuring maximal LV wall thickness ≥30 mm, 44 patients (70%) had discrepant classification regarding massive hypertrophy. MRI identified 52 patients (83%) with massive hypertrophy; TTE, 30 patients (48%). Although guidelines recommend MRI or TTE imaging to assess cardiac anatomy in HC, this study shows discrepancy between the techniques for maximal reported LV wall thickness assessment. In conclusion, because this measure clinically affects prognosis and therapeutic decision making, efforts to resolve these discrepancies are critical. Copyright © 2016 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Tzong-Yi Lee
Full Text Available S-nitrosylation, the covalent attachment of nitric oxide (NO) to the sulfur atom of cysteine, is a selective and reversible protein post-translational modification (PTM) that regulates protein activity, localization, and stability. Despite its implication in the regulation of protein functions and cell signaling, the substrate specificity of cysteine S-nitrosylation remains unknown. Based on a total of 586 experimentally identified S-nitrosylation sites from SNAP/L-cysteine-stimulated mouse endothelial cells, this work presents an informatics investigation of S-nitrosylation sites, including structural factors such as the composition of flanking amino acids, the accessible surface area (ASA), and physicochemical properties, i.e., positive charge and side chain interaction parameter. Because of the difficulty of obtaining conserved motifs by conventional motif analysis, maximal dependence decomposition (MDD) has been applied to obtain statistically significant conserved motifs. A support vector machine (SVM) is applied to generate a predictive model for each MDD-clustered motif. According to five-fold cross-validation, the MDD-clustered SVMs achieved an accuracy of 0.902 and provided promising performance on an independent test set. The effectiveness of the model was demonstrated by the correct identification of previously reported S-nitrosylation sites of Bos taurus dimethylarginine dimethylaminohydrolase 1 (DDAH1) and human hemoglobin subunit beta (HBB). Finally, the MDD-clustered model was adopted to construct an effective web-based tool, named SNOSite (http://csb.cse.yzu.edu.tw/SNOSite/), for identifying S-nitrosylation sites in uncharacterized protein sequences.
Indian Academy of Sciences (India)
A subset S of … is said to be full if S is a maximal good set in S_1 × S_2 × ⋯ × S_n ([3], p. 183). Two points … assume there is a p for which |n_p| = |m_p|. Then ∑_{j=1}^{k} n_j x_j = 0 … M G Nadkarni for suggesting the problems and for encouragement and …
DEFF Research Database (Denmark)
Thude, Bettina Ravnborg; Stenager, Egon; von Plessen, Christian
2018-01-01
Findings: The study found that the leadership set-up did not have any clear influence on interdisciplinary cooperation, as all wards had a high degree of interdisciplinary cooperation independent of which leadership set-up they had. Instead, the authors found a relation between leadership set-up and leader… could influence legitimacy. Originality/value: The study shows that leadership set-up is not the predominant factor that creates interdisciplinary cooperation; rather, leader legitimacy also should be considered. Additionally, the study shows that leader legitimacy can be difficult to establish… and that it cannot be taken for granted. This is something chief executive officers should bear in mind when they plan and implement new leadership structures. Therefore, it would also be useful to look more closely at how to achieve legitimacy in cases where the leader is from a different profession to the staff…
Maximizers versus satisficers: Decision-making styles, competence, and outcomes
Andrew M. Parker; Wändi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decision...
International Nuclear Information System (INIS)
Wetterich, C.
1999-01-01
The naturalness of maximal mixing between muon- and tau-neutrinos is investigated. A spontaneously broken nonabelian generation symmetry can explain a small parameter which governs the deviation from maximal mixing. In many cases all three neutrino masses are almost degenerate. Maximal ν_μ-ν_τ mixing suggests that the leading contribution to the light neutrino masses arises from the expectation value of a heavy weak triplet rather than from the seesaw mechanism. In this scenario the deviation from maximal mixing is predicted to be less than about 1%. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
On the way towards a generalized entropy maximization procedure
International Nuclear Information System (INIS)
Bagci, G. Baris; Tirnakli, Ugur
2009-01-01
We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results in the exponential stationary distribution asymptotically for q ∈ (0,1], in contrast to the inverse power-law stationary distribution obtained through the ordinary entropy maximization procedure. Another result of the generalized entropy maximization procedure is that one can naturally obtain all the possible stationary distributions associated with the Tsallis entropies by employing either ordinary or q-generalized Fourier transforms in the averaging procedure.
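For reference, the two entropy families named above have the standard definitions (these are the conventional textbook forms, not notation specific to this paper; both reduce to the Shannon entropy as q → 1):

```latex
S_q^{\mathrm{R\acute{e}nyi}}[p] = \frac{1}{1-q}\,\ln\!\Big(\sum_i p_i^{\,q}\Big),
\qquad
S_q^{\mathrm{Tsallis}}[p] = \frac{1}{q-1}\Big(1-\sum_i p_i^{\,q}\Big).
```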
Violating Bell inequalities maximally for two d-dimensional systems
International Nuclear Information System (INIS)
Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin
2006-01-01
We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app that violate the Bell inequality more strongly than the maximally entangled state, but are somewhat close to these eigenvectors, is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information.
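The Bell-operator approach can be illustrated in the simplest case, d = 2 with the CHSH inequality (this standard example is mine, not taken from the paper): writing the CHSH expression as an operator and taking its largest eigenvalue reproduces the quantum maximum, the Tsirelson bound 2√2, while any local-realistic model is bounded by 2.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Standard CHSH measurement settings:
# Alice measures A0 = Z, A1 = X;
# Bob measures B0 = (Z + X)/sqrt(2), B1 = (Z - X)/sqrt(2).
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

# CHSH Bell operator: A0⊗B0 + A0⊗B1 + A1⊗B0 - A1⊗B1
bell_op = (np.kron(A0, B0) + np.kron(A0, B1)
           + np.kron(A1, B0) - np.kron(A1, B1))

# Largest eigenvalue of the (Hermitian) Bell operator = Tsirelson bound.
max_eig = np.linalg.eigvalsh(bell_op).max()  # ≈ 2.828 = 2*sqrt(2)
```

For d > 2 the same recipe applies with higher-dimensional measurement operators, which is where the paper's asymmetric entangled eigenvectors appear.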
Analysis of elliptically polarized maximally entangled states for bell inequality tests
Martin, A.; Smirr, J.-L.; Kaiser, F.; Diamanti, E.; Issautier, A.; Alibart, O.; Frey, R.; Zaquine, I.; Tanzilli, S.
2012-06-01
When elliptically polarized maximally entangled states are considered, i.e., states having a non random phase factor between the two bipartite polarization components, the standard settings used for optimal violation of Bell inequalities are no longer adapted. One way to retrieve the maximal amount of violation is to compensate for this phase while keeping the standard Bell inequality analysis settings. We propose in this paper a general theoretical approach that allows determining and adjusting the phase of elliptically polarized maximally entangled states in order to optimize the violation of Bell inequalities. The formalism is also applied to several suggested experimental phase compensation schemes. In order to emphasize the simplicity and relevance of our approach, we also describe an experimental implementation using a standard Soleil-Babinet phase compensator. This device is employed to correct the phase that appears in the maximally entangled state generated from a type-II nonlinear photon-pair source after the photons are created and distributed over fiber channels.
Independent technical review, handbook
International Nuclear Information System (INIS)
1994-02-01
Purpose: Provide an independent engineering review of the major projects being funded by the Department of Energy, Office of Environmental Restoration and Waste Management. The independent engineering review will address questions of whether the engineering practice is sufficiently developed to a point where a major project can be executed without significant technical problems. The independent review will focus on questions related to: (1) Adequacy of development of the technical base of understanding; (2) Status of development and availability of technology among the various alternatives; (3) Status and availability of the industrial infrastructure to support project design, equipment fabrication, facility construction, and process and program/project operation; (4) Adequacy of the design effort to provide a sound foundation to support execution of project; (5) Ability of the organization to fully integrate the system, and direct, manage, and control the execution of a complex major project.
Independent technical review, handbook
Energy Technology Data Exchange (ETDEWEB)
1994-02-01
Purpose: Provide an independent engineering review of the major projects being funded by the Department of Energy, Office of Environmental Restoration and Waste Management. The independent engineering review will address questions of whether the engineering practice is sufficiently developed to a point where a major project can be executed without significant technical problems. The independent review will focus on questions related to: (1) Adequacy of development of the technical base of understanding; (2) Status of development and availability of technology among the various alternatives; (3) Status and availability of the industrial infrastructure to support project design, equipment fabrication, facility construction, and process and program/project operation; (4) Adequacy of the design effort to provide a sound foundation to support execution of project; (5) Ability of the organization to fully integrate the system, and direct, manage, and control the execution of a complex major project.
Evaluation of anti-hyperglycemic effect of Actinidia kolomikta (Maxim. et Rupr.) Maxim. root extract.
Hu, Xuansheng; Cheng, Delin; Wang, Linbo; Li, Shuhong; Wang, Yuepeng; Li, Kejuan; Yang, Yingnan; Zhang, Zhenya
2015-05-01
This study aimed to evaluate the anti-hyperglycemic effect of an ethanol extract of Actinidia kolomikta (Maxim. et Rupr.) Maxim. root (AKE). An in vitro evaluation was performed using rat intestinal α-glucosidases (maltase and sucrase), key enzymes linked with type 2 diabetes, and an in vivo evaluation was performed by loading maltose, sucrose, or glucose in normal rats. AKE showed concentration-dependent inhibition of rat intestinal maltase and sucrase, with IC50 values of 1.83 and 1.03 mg/mL, respectively. In normal rats loaded with maltose, sucrose, or glucose, administration of AKE significantly reduced postprandial hyperglycemia, an effect similar to that of acarbose, a clinically used anti-diabetic drug. High contents of total phenolics (80.49 ± 0.05 mg GAE/g extract) and total flavonoids (430.69 ± 0.91 mg RE/g extract) were detected in AKE. In conclusion, AKE possessed anti-hyperglycemic effects, and the possible mechanisms involve inhibition of α-glucosidase as well as improvement of insulin release and/or insulin sensitivity. The anti-hyperglycemic activity of AKE may be attributable to its high contents of phenolic and flavonoid compounds.
Sakamoto, Akihiro; Naito, Hisashi; Chow, Chin Moi
2015-07-01
Hyperventilation, implemented during recovery of repeated maximal sprints, has been shown to attenuate performance decrement. This study evaluated the effects of hyperventilation, using strength exercises, on muscle torque output and EMG amplitude. Fifteen power-trained athletes underwent maximal isokinetic knee extensions consisting of 12 repetitions × 8 sets at 60°/s and 25 repetitions × 8 sets at 300°/s. The inter-set interval was 40 s for both speeds. For the control condition, subjects breathed spontaneously during the interval period. For the hyperventilation condition, subjects hyperventilated for 30 s before each exercise set (50 breaths/min, PETCO2: 20-25 mmHg). EMG was recorded from the vastus medialis and lateralis muscles to calculate the mean amplitude for each contraction. Hyperventilation increased blood pH by 0.065-0.081 and lowered PCO2 by 8.3-10.3 mmHg from the control values (P < 0.001). Peak torque declined with repetition and set numbers for both speeds (P < 0.001), but the declining patterns were similar between conditions. A significant, but small enhancement in peak torque was observed with hyperventilation at 60°/s during the initial repetition phase of the first (P = 0.032) and fourth sets (P = 0.040). EMG amplitude also declined with set number (P < 0.001) for both speeds and muscles, which was, however, not attenuated by hyperventilation. Despite a minor ergogenic effect in peak torque at 60°/s, hyperventilation was not effective in attenuating the decrement in torque output at 300°/s and decrement in EMG amplitude at both speeds during repeated sets of maximal isokinetic knee extensions.
Large margin image set representation and classification
Wang, Jim Jing-Yan
2014-07-06
In this paper, we propose a novel image set representation and classification method that maximizes the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets with both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class that provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
Large margin image set representation and classification
Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin
2014-01-01
In this paper, we propose a novel image set representation and classification method that maximizes the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets with both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class that provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
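The margin defined in the abstract above lends itself to a compact sketch. The following is a minimal illustration in plain Python, using nearest-sample Euclidean distance as a simple stand-in for the paper's affine-hull set model; function names and the toy data are hypothetical:

```python
import math

def set_distance(A, B):
    # Smallest pairwise Euclidean distance between two image sets,
    # each given as a list of feature vectors.
    return min(math.dist(a, b) for a in A for b in B)

def margin(query_set, same_class_sets, other_class_sets):
    # Margin of an image set, per the abstract: distance to the
    # nearest set of a different class minus distance to the nearest
    # set of the same class. Classification assigns the class that
    # yields the largest margin.
    d_other = min(set_distance(query_set, S) for S in other_class_sets)
    d_same = min(set_distance(query_set, S) for S in same_class_sets)
    return d_other - d_same

# Toy data: the query sits near the same-class set, far from the other.
query = [[0.0, 0.0], [0.2, 0.1]]
same = [[[0.1, 0.1], [0.3, 0.2]]]
other = [[[5.0, 5.0], [6.0, 5.0]]]
print(margin(query, same, other) > 0)  # True -- correctly separated
```

The paper learns set representations by maximizing this quantity; the sketch only evaluates it for fixed sets.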
Kinetic theory in maximal-acceleration invariant phase space
International Nuclear Information System (INIS)
Brandt, H.E.
1989-01-01
A vanishing directional derivative of a scalar field along particle trajectories in maximal acceleration invariant phase space is identical in form to the ordinary covariant Vlasov equation in curved spacetime in the presence of both gravitational and nongravitational forces. A natural foundation is thereby provided for a covariant kinetic theory of particles in maximal-acceleration invariant phase space. (orig.)
IIB solutions with N>28 Killing spinors are maximally supersymmetric
International Nuclear Information System (INIS)
Gran, U.; Gutowski, J.; Papadopoulos, G.; Roest, D.
2007-01-01
We show that all IIB supergravity backgrounds which admit more than 28 Killing spinors are maximally supersymmetric. In particular, we find that for all N>28 backgrounds the supercovariant curvature vanishes, and that quotients of maximally supersymmetric backgrounds preserve either all 32 or N<29 supersymmetries.
Muscle mitochondrial capacity exceeds maximal oxygen delivery in humans
DEFF Research Database (Denmark)
Boushel, Robert Christopher; Gnaiger, Erich; Calbet, Jose A L
2011-01-01
Across a wide range of species and body mass a close matching exists between maximal conductive oxygen delivery and mitochondrial respiratory rate. In this study we investigated in humans how closely in vivo maximal oxygen consumption (VO2 max) is matched to state 3 muscle mitochondrial respira...
Pace's Maxims for Homegrown Library Projects. Coming Full Circle
Pace, Andrew K.
2005-01-01
This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…
DEFF Research Database (Denmark)
Könemann, Patrick
just contain a list of strings, one for each line, whereas the structure of models is defined by their meta models. There are tools available which are able to compute the diff between two models, e.g. RSA or EMF Compare. However, their diff is not model-independent, i.e. it refers to the models...
All Those Independent Variables.
Meacham, Merle L.
This paper presents a case study of a sixth grade remedial math class which illustrates the thesis that only the "experimental attitude," not the "experimental method," is appropriate in the classroom. The thesis is based on the fact that too many independent variables exist in a classroom situation to allow precise measurement. The case study…
Bayesian Independent Component Analysis
DEFF Research Database (Denmark)
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...
Independent safety organization
International Nuclear Information System (INIS)
Kato, W.Y.; Weinstock, E.V.; Carew, J.F.; Cerbone, R.J.; Guppy, J.G.; Hall, R.E.; Taylor, J.H.
1985-01-01
Brookhaven National Laboratory has conducted a study on the need and feasibility of an independent organization to investigate significant safety events for the Office for Analysis and Evaluation of Operational Data, USNRC. The study consists of three parts: the need for an independent organization to investigate significant safety events, alternative organizations to conduct investigations, and legislative requirements. The determination of need was investigated by reviewing current NRC investigation practices, comparing aviation and nuclear industry practices, and interviewing a spectrum of representatives from the nuclear industry, the regulatory agency, and the public sector. The advantages and disadvantages of alternative independent organizations were studied, namely, an Office of Nuclear Safety headed by a director reporting to the Executive Director for Operations (EDO) of NRC; an Office of Nuclear Safety headed by a director reporting to the NRC Commissioners; a multi-member NTSB-type Nuclear Safety Board independent of the NRC. The costs associated with operating a Nuclear Safety Board were also included in the study. The legislative requirements, both new authority and changes to the existing NRC legislative authority, were studied. 134 references
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
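The closing bound can be checked on a small example. Below is a one-line helper (hypothetical name) evaluating (4n−m−1)/7, applied to the 5-cycle, a connected triangle-free graph whose independence number is exactly 2:

```python
def independence_lower_bound(n, m):
    # Lower bound (4n - m - 1) / 7 on the independence number of a
    # connected triangle-free graph with n vertices and m edges,
    # as stated in the abstract above.
    return (4 * n - m - 1) / 7

# The 5-cycle C5: n = 5, m = 5, triangle-free, independence number 2.
print(independence_lower_bound(5, 5))  # 2.0 -- the bound is tight here
```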
Douglas, Julie A.; Sandefur, Conner I.
2008-01-01
In family-based genetic studies, it is often useful to identify a subset of unrelated individuals. When such studies are conducted in population isolates, however, most if not all individuals are often detectably related to each other. To identify a set of maximally unrelated (or equivalently, minimally related) individuals, we have implemented simulated annealing, a general-purpose algorithm for solving difficult combinatorial optimization problems. We illustrate our method on data from a ge...
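A generic simulated-annealing sketch for this selection problem might look as follows. This is an illustration under assumed inputs (a symmetric pairwise kinship matrix), not the authors' implementation:

```python
import math
import random

def anneal_unrelated(kinship, k, steps=5000, t0=1.0, seed=0):
    """Select k individuals minimizing summed pairwise kinship via
    simulated annealing (a generic sketch; `kinship` is a symmetric
    matrix of pairwise kinship coefficients)."""
    rng = random.Random(seed)
    n = len(kinship)
    current = rng.sample(range(n), k)

    def cost(sel):
        # Total relatedness within the selected subset.
        return sum(kinship[i][j]
                   for a, i in enumerate(sel) for j in sel[a + 1:])

    best, best_cost = list(current), cost(current)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        out = rng.randrange(k)                  # swap one selected member
        inn = rng.choice([v for v in range(n) if v not in current])
        cand = list(current)
        cand[out] = inn
        delta = cost(cand) - cost(current)
        # Accept improvements always; accept worsenings with
        # Boltzmann probability exp(-delta / temp).
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = cand
            if cost(current) < best_cost:
                best, best_cost = list(current), cost(current)
    return sorted(best), best_cost
```

With a toy matrix in which individuals 0-2 are mutually related and 3-5 are unrelated to everyone, the sampler settles on a zero-cost subset.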
Maximizing biomarker discovery by minimizing gene signatures
Directory of Open Access Journals (Sweden)
Chang Chang
2011-12-01
Background: Gene signatures can potentially be of considerable value in clinical diagnosis. However, gene signatures defined with different methods can differ considerably even when applied to the same disease and the same endpoint. Previous studies have shown that correct selection of subsets of genes from microarray data is key for accurate classification of disease phenotypes, and a number of methods have been proposed for this purpose. However, these methods refine the subsets by considering each feature singly, and they do not confirm the association between the genes identified in each gene signature and the phenotype of the disease. We propose a new method, termed Minimize Feature's Size (MFS), based on multiple-level similarity analyses and gene-disease association for breast cancer endpoints, comparing classifier models generated in the second phase of MicroArray Quality Control (MAQC-II), with the aim of developing effective meta-analysis strategies that transform the MAQC-II signatures into a robust and reliable set of biomarkers for clinical applications. Results: We analyzed the similarity of the multiple gene signatures within an endpoint and between the two breast cancer endpoints at the probe and gene levels. The results indicate that disease-related genes are preferentially selected as components of a gene signature, and that the gene signatures for the two endpoints could be interchangeable. Minimized signatures were built at the probe level using MFS for each endpoint, yielding much smaller gene signatures with predictive power similar to the gene signatures from MAQC-II. Conclusions: Our results indicate that gene signatures of both large and small sizes can perform equally well in clinical applications. In addition, consistency and biological significance can be detected among different gene signatures, reflecting the
The generalized scheme-independent Crewther relation in QCD
Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; Brodsky, Stanley J.
2017-07-01
The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbatively calculable QCD process. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: the product of the Bjorken sum rule for spin-dependent deep inelastic lepton-nucleon scattering and the Adler function, defined from the cross section for electron-positron annihilation into hadrons, has no pQCD radiative corrections. The "Generalized Crewther Relation" relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function (Dns) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering (CBjp) at leading twist. A scheme-dependent ΔCSB term appears in the analysis to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence on both the choice of renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both Dns and the inverse coefficient CBjp^(-1) have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, α̂_d(Q) = Σ_{i≥1} α̂_{g1}^i(Q_i), at their respective physical scales. This identity is independent of the choice of renormalization scheme at any finite order, and the dependence on the choice of the initial scale is negligible. Similar
Optimization of Second Fault Detection Thresholds to Maximize Mission POS
Anzalone, Evan
2018-01-01
In order to support manned spaceflight safety requirements, the Space Launch System (SLS) has defined program-level requirements for key systems to ensure successful operation under single fault conditions. To accommodate this with regards to Navigation, the SLS utilizes an internally redundant Inertial Navigation System (INS) with built-in capability to detect, isolate, and recover from first failure conditions and still maintain adherence to performance requirements. The unit utilizes multiple hardware- and software-level techniques to enable detection, isolation, and recovery from these events in terms of its built-in Fault Detection, Isolation, and Recovery (FDIR) algorithms. Successful operation is defined in terms of sufficient navigation accuracy at insertion while operating under worst case single sensor outages (gyroscope and accelerometer faults at launch). In addition to first fault detection and recovery, the SLS program has also levied requirements relating to the capability of the INS to detect a second fault, tracking any unacceptable uncertainty in knowledge of the vehicle's state. This detection functionality is required in order to feed abort analysis and ensure crew safety. Increases in navigation state error and sensor faults can drive the vehicle outside of its operational as-designed environments and outside of its performance envelope causing loss of mission, or worse, loss of crew. The criteria for operation under second faults allows for a larger set of achievable missions in terms of potential fault conditions, due to the INS operating at the edge of its capability. As this performance is defined and controlled at the vehicle level, it allows for the use of system level margins to increase probability of mission success on the operational edges of the design space. Due to the implications of the vehicle response to abort conditions (such as a potentially failed INS), it is important to consider a wide range of failure scenarios in terms of
Maximal stochastic transport in the Lorenz equations
Energy Technology Data Exchange (ETDEWEB)
Agarwal, Sahil, E-mail: sahil.agarwal@yale.edu [Program in Applied Mathematics, Yale University, New Haven (United States); Wettlaufer, J.S., E-mail: john.wettlaufer@yale.edu [Program in Applied Mathematics, Yale University, New Haven (United States); Departments of Geology & Geophysics, Mathematics and Physics, Yale University, New Haven (United States); Mathematical Institute, University of Oxford, Oxford (United Kingdom); Nordita, Royal Institute of Technology and Stockholm University, Stockholm (Sweden)
2016-01-08
We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh–Bénard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected, the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering [1], but their variation with noise amplitude exhibits interesting behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity depends on the number of realizations in the ensemble; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits, the degree of which depends on the degree to which the ensemble represents the ergodic set. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations and the numerical convergence of the noise correlations. The numerical convergence of both the ensemble and time averages of the noise correlations is sufficiently slow that it is the limiting aspect of the realization of these bounds. Finally, we note that the full solutions of the stochastic equations demonstrate that the effect of noise is equivalent to the effect of chaos.
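A minimal way to generate sample paths of a stochastic Lorenz system is Euler-Maruyama integration with additive noise. The sketch below uses the classic parameter values and a simple additive forcing of amplitude eps on every component; this noise model is an assumption for illustration and may differ from the paper's precise stochastic forcing:

```python
import random

def lorenz_em(steps=10000, dt=1e-3, eps=0.1,
              sigma=10.0, rho=28.0, beta=8.0 / 3.0, seed=1):
    """Euler-Maruyama integration of the Lorenz equations with
    additive white noise of amplitude eps on each component
    (a generic sketch, not the authors' code)."""
    rng = random.Random(seed)
    x, y, z = 1.0, 1.0, 1.0
    sqdt = dt ** 0.5
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        # Deterministic drift plus sqrt(dt)-scaled Gaussian increments.
        x += dx * dt + eps * sqdt * rng.gauss(0.0, 1.0)
        y += dy * dt + eps * sqdt * rng.gauss(0.0, 1.0)
        z += dz * dt + eps * sqdt * rng.gauss(0.0, 1.0)
    return x, y, z
```

Averaging a transported quantity over an ensemble of such realizations is the kind of computation for which the stochastic upper bounds above provide a benchmark.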
Maximizing the Adjacent Possible in Automata Chemistries.
Hickinbotham, Simon; Clark, Edward; Nellis, Adam; Stepney, Susan; Clarke, Tim; Young, Peter
2016-01-01
Automata chemistries are good vehicles for experimentation in open-ended evolution, but they are by necessity complex systems whose low-level properties require careful design. To aid the process of designing automata chemistries, we develop an abstract model that classifies the features of a chemistry from a physical (bottom up) perspective and from a biological (top down) perspective. There are two levels: things that can evolve, and things that cannot. We equate the evolving level with biology and the non-evolving level with physics. We design our initial organisms in the biology, so they can evolve. We design the physics to facilitate evolvable biologies. This architecture leads to a set of design principles that should be observed when creating an instantiation of the architecture. These principles are Everything Evolves, Everything's Soft, and Everything Dies. To evaluate these ideas, we present experiments in the recently developed Stringmol automata chemistry. We examine the properties of Stringmol with respect to the principles, and so demonstrate the usefulness of the principles in designing automata chemistries.
Quantum independent increment processes
Franz, Uwe
2005-01-01
This volume is the first of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald during the period March 9 – 22, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present first volume contains the following lectures: "Lévy Processes in Euclidean Spaces and Groups" by David Applebaum, "Locally Compact Quantum Groups" by Johan Kustermans, "Quantum Stochastic Analysis" by J. Martin Lindsay, and "Dilations, Cocycles and Product Systems" by B.V. Rajarama Bhat.
Quantum independent increment processes
Franz, Uwe
2006-01-01
This is the second of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald in March, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present second volume contains the following lectures: "Random Walks on Finite Quantum Groups" by Uwe Franz and Rolf Gohm, "Quantum Markov Processes and Applications in Physics" by Burkhard Kümmerer, "Classical and Free Infinite Divisibility and Lévy Processes" by Ole E. Barndorff-Nielsen and Steen Thorbjornsen, and "Lévy Processes on Quantum Groups and Dual Groups" by Uwe Franz.
Independent random sampling methods
Martino, Luca; Míguez, Joaquín
2018-01-01
This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
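One of the classic general-purpose techniques for generating independent samples from an arbitrary density, covered in texts like this one, is rejection sampling. The sketch below is a minimal illustration; the triangular target and uniform proposal are illustrative choices, not from the book:

```python
import random

def rejection_sample(target_pdf, draw_proposal, proposal_pdf, M, rng):
    """Draw one independent sample from target_pdf by rejection
    sampling; requires target_pdf(x) <= M * proposal_pdf(x) for all x."""
    while True:
        x = draw_proposal(rng)
        # Accept x with probability target(x) / (M * proposal(x)).
        if rng.random() * M * proposal_pdf(x) <= target_pdf(x):
            return x

# Illustrative target: triangular density p(x) = 2x on [0, 1],
# sampled through a uniform proposal with envelope constant M = 2.
rng = random.Random(42)
samples = [rejection_sample(lambda x: 2.0 * x,
                            lambda r: r.random(),
                            lambda x: 1.0,
                            2.0, rng)
           for _ in range(2000)]
print(sum(samples) / len(samples))  # close to E[X] = 2/3
```

Each accepted draw is an exact, independent sample from the target, which is what distinguishes this family of methods from Markov chain approaches.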
International exploration by independents
International Nuclear Information System (INIS)
Bertagne, R.G.
1991-01-01
Recent industry trends indicate that the smaller US independents are looking at foreign exploration opportunities as one of the alternatives for growth in the new age of exploration. It is usually accepted that foreign finding costs per barrel are substantially lower than domestic because of the large reserve potential of international plays. To get involved overseas requires, however, an adaptation to different cultural, financial, legal, operational, and political conditions. Generally foreign exploration proceeds at a slower pace than domestic because concessions are granted by the government, or are explored in partnership with the national oil company. First, a mid- to long-term strategy, tailored to the goals and the financial capabilities of the company, must be prepared; it must be followed by an ongoing evaluation of quality prospects in various sedimentary basins, and a careful planning and conduct of the operations. To successfully explore overseas also requires the presence on the team of a minimum number of explorationists and engineers thoroughly familiar with the various exploratory and operational aspects of foreign work, having had a considerable amount of onsite experience in various geographical and climatic environments. Independents that are best suited for foreign expansion are those that have been financially successful domestically, and have a good discovery track record. When properly approached foreign exploration is well within the reach of smaller US independents and presents essentially no greater risk than domestic exploration; the reward, however, can be much larger and can catapult the company into the big leagues
International exploration by independents
International Nuclear Information System (INIS)
Bertragne, R.G.
1992-01-01
Recent industry trends indicate that the smaller U.S. independents are looking at foreign exploration opportunities as one of the alternatives for growth in the new age of exploration. Foreign finding costs per barrel usually are accepted to be substantially lower than domestic costs because of the large reserve potential of international plays. To get involved in overseas exploration, however, requires the explorationist to adapt to different cultural, financial, legal, operational, and political conditions. Generally, foreign exploration proceeds at a slower pace than domestic exploration because concessions are granted by a country's government, or are explored in partnership with a national oil company. First, the explorationist must prepare a mid- to long-term strategy, tailored to the goals and the financial capabilities of the company; next, is an ongoing evaluation of quality prospects in various sedimentary basins, and careful planning and conduct of the operations. To successfully explore overseas also requires the presence of a minimum number of explorationists and engineers thoroughly familiar with the various exploratory and operational aspects of foreign work. Ideally, these team members will have had a considerable amount of on-site experience in various countries and climates. Independents best suited for foreign expansion are those who have been financially successful in domestic exploration. When properly approached, foreign exploration is well within the reach of smaller U.S. independents, and presents essentially no greater risk than domestic exploration; however, the reward can be much larger and can catapult the company into the 'big leagues.'
Agent independent task planning
Davis, William S.
1990-01-01
Agent-Independent Planning is a technique that allows the construction of activity plans without regard to the agent that will perform them. Once generated, a plan is then validated and translated into instructions for a particular agent, whether a robot, crewmember, or software-based control system. Because Space Station Freedom (SSF) is planned for orbital operations for approximately thirty years, it will almost certainly experience numerous enhancements and upgrades, including upgrades in robotic manipulators. Agent-Independent Planning provides the capability to construct plans for SSF operations, independent of specific robotic systems, by combining techniques of object oriented modeling, nonlinear planning and temporal logic. Since a plan is validated using the physical and functional models of a particular agent, new robotic systems can be developed and integrated with existing operations in a robust manner. This technique also provides the capability to generate plans for crewmembers with varying skill levels, and later apply these same plans to more sophisticated robotic manipulators made available by evolutions in technology.
International exploration by independents
International Nuclear Information System (INIS)
Bertagne, R.G.
1992-01-01
Recent industry trends indicate that the smaller U.S. independents are looking at foreign exploration opportunities as one of the alternatives for growth in the new age of exploration. The problems of communications and logistics caused by different cultures and by geographic distances must be carefully evaluated. A mid-term to long-term strategy tailored to the goals and the financial capabilities of the company should be prepared and followed by a careful planning of the operations. This paper addresses some aspects of foreign exploration that should be considered before an independent venture into the foreign field. It also provides some guidelines for conducting successful overseas operations. When properly assessed, foreign exploration is well within the reach of smaller U.S. independents and presents no greater risk than domestic exploration; the rewards, however, can be much larger. Furthermore, the Oil and Gas Journal surveys of the 300 largest U.S. petroleum companies show that companies with a consistent foreign exploration policy have fared better financially during difficult times
An inquiry into the bibliographic sources of some of the Bustan's maxims
Directory of Open Access Journals (Sweden)
sajjad rahmatian
2016-12-01
Sa`di is one of those poets who gave a special place to preaching and guiding people, and among his works the Bustan is devoted throughout to advice and maxims on various legal and ethical subjects. In composing this work and expressing its moral points, Sa`di was surely influenced, directly or indirectly, by earlier sources, and may have drawn on their content. The main purpose of this article is to review the basis and sources of the Bustan's maxims and to show which texts and works influenced Sa`di in expressing them. To this end, works devoted wholly or partly to aphorisms were searched in order to discover and extract traces of Sa`di's borrowing from their moral and didactic content. Among the most important findings of this study are the indirect influence of some Pahlavi books of maxims (such as the maxims of Azarbad Marespandan and the book of maxims of Bozorgmehr), and Sa`di's direct debt to the moral and ethical works of poets and writers before him; of these, his borrowing from the maxims of Abu Shakur Balkhi, Ferdowsi, and Keikavus is especially noteworthy.
Can monkeys make investments based on maximized pay-off?
Directory of Open Access Journals (Sweden)
Sophie Steelandt
2011-03-01
Animals can maximize benefits, but it is not known whether they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules for each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of 21 subjects learned to maximize benefits by adapting investment to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible.
Directory of Open Access Journals (Sweden)
M.G. Bara Filho
2008-01-01
Full Text Available Strength and flexibility are common components of a training program, and their maximal values are obtained through specific tests. However, little is known about the muscle-damaging effect of these procedures. Objective: to verify serum CK changes 24 h after a submaximal stretching routine, a maximal static flexibility test, and a maximal strength test. Methods: the sample was composed of 14 subjects (men and women, 28 ± 6 yr), all physical education students. The volunteers were divided into a control group (CG) and an experimental group (EG) that was submitted to a stretching routine (EG-ST), a maximal static flexibility test (EG-FLEX) and a 1-RM test (EG-1RM), with a one-week interval between tests. The anthropometric characteristics were obtained with a digital scale with stadiometer (Filizola, São Paulo, Brazil, 2002). Serum CK in the blood samples was measured using the IFCC method, with reference values of 26-155 U/L. The De Lorme and Watkins technique was used to assess maximal strength through the bench press and leg press. The maximal flexibility test consisted of three 20-second sets to the point of maximal discomfort. The stretching was done within the normal range of motion for 6 seconds. Results: the basal and 24 h post CK values in CG and EG (ST, FLEX and 1RM) were, respectively, 195.0 ± 129.5 vs. 202.1 ± 124.2; 213.3 ± 133.2 vs. 174.7 ± 115.8; 213.3 ± 133.2 vs. 226.6 ± 126.7; and 213.3 ± 133.2 vs. 275.9 ± 157.2. A significant difference between pre and post values was observed only in EG-1RM (p = 0.02). Conclusion: only the maximal dynamic strength exercise was capable of causing skeletal muscle damage.
Gravitational collapse of charged dust shell and maximal slicing condition
International Nuclear Information System (INIS)
Maeda, Keiichi
1980-01-01
The maximal slicing condition is a qualitatively good time coordinate condition for following gravitational collapse in numerical calculations. The analytic solution of gravitational collapse under the maximal slicing condition is given for the case of a spherical charged dust shell, and the behavior of time slices with this coordinate condition is investigated. It is concluded that under the maximal slicing condition the gravitational collapse can be followed until the radius of the shell decreases to about 0.7 times the radius of the event horizon. (author)
Optimal quantum error correcting codes from absolutely maximally entangled states
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension…
Breakdown of maximality conjecture in continuous phase transitions
International Nuclear Information System (INIS)
Mukamel, D.; Jaric, M.V.
1983-04-01
A Landau-Ginzburg-Wilson model associated with a single irreducible representation which exhibits an ordered phase whose symmetry group is not a maximal isotropy subgroup of the symmetry group of the disordered phase is constructed. This example disproves the maximality conjecture suggested in numerous previous studies. Below the (continuous) transition, the order parameter points along a direction which varies with the temperature and with the other parameters which define the model. An extension of the maximality conjecture to reducible representations was postulated in the context of Higgs symmetry breaking mechanism. Our model can also be extended to provide a counter example in these cases. (author)
International Nuclear Information System (INIS)
Brown, K.A.; Osbakken, M.; Boucher, C.A.; Strauss, H.W.; Pohost, G.M.; Okada, R.D.
1985-01-01
The incidence and causes of abnormal thallium-201 (TI-201) myocardial perfusion studies in the absence of significant coronary artery disease were examined. The study group consisted of 100 consecutive patients undergoing exercise TI-201 testing and coronary angiography who were found to have maximal coronary artery diameter narrowing of less than 50%. Maximal coronary stenosis ranged from 0 to 40%. The independent and relative influences of patient clinical, exercise and angiographic data were assessed by logistic regression analysis. Significant predictors of a positive stress TI-201 test result were: (1) percent maximal coronary stenosis (p less than 0.0005), (2) propranolol use (p less than 0.01), (3) interaction of propranolol use and percent maximal stenosis (p less than 0.005), and (4) stress-induced chest pain (p = 0.05). No other patient variable had a significant influence. Positive TI-201 test results were more common in patients with 21 to 40% maximal stenosis (59%) than in patients with 0 to 20% maximal stenosis (27%) (p less than 0.01). Among patients with 21 to 40% stenosis, a positive test response was more common when at least 85% of maximal predicted heart rate was achieved (75%) than when it was not (40%) (p less than 0.05). Of 16 nonapical perfusion defects seen in patients with 21 to 40% maximal stenosis, 14 were in the territory that corresponded with such a coronary stenosis. Patients taking propranolol were more likely to have a positive TI-201 test result (45%) than patients not taking propranolol (22%) (p less than 0.05)
Triangle-free graphs whose independence number equals the degree
DEFF Research Database (Denmark)
Brandt, Stephan
2010-01-01
In a triangle-free graph, the neighbourhood of every vertex is an independent set. We investigate the class S of triangle-free graphs where the neighbourhoods of vertices are maximum independent sets. Such a graph G must be regular of degree d = α (G) and the fractional chromatic number must sati...
Some Results on the Independence Polynomial of Unicyclic Graphs
Directory of Open Access Journals (Sweden)
Oboudi Mohammad Reza
2018-05-01
Full Text Available Let G be a simple graph on n vertices. An independent set in a graph is a set of pairwise non-adjacent vertices. The independence polynomial of G is the polynomial $I(G,x)=\sum_{k=0}^{n} s(G,k)x^k$, where s(G,k) is the number of independent sets of G with k vertices.
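The coefficients s(G,k) can be computed by brute-force enumeration on small graphs; the sketch below is illustrative and not from the paper.

```python
from itertools import combinations

def independence_polynomial(n, edges):
    """Return [s(G,0), ..., s(G,n)], the coefficients of I(G, x),
    by enumerating all vertex subsets (exponential; small graphs only)."""
    adj = {frozenset(e) for e in edges}
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            # a subset is independent if no pair of its vertices is an edge
            if all(frozenset(p) not in adj for p in combinations(subset, 2)):
                coeffs[k] += 1
    return coeffs

# Path on 3 vertices 0-1-2: independent sets are {}, {0}, {1}, {2}, {0,2}
print(independence_polynomial(3, [(0, 1), (1, 2)]))  # [1, 3, 1, 0]
```

So for the path on three vertices, I(G, x) = 1 + 3x + x².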
Ranking Specific Sets of Objects.
Maly, Jan; Woltran, Stefan
2017-01-01
Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking - jointly satisfying properties as dominance and independence - on the whole power set of objects. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or are not satisfying some background theory. In this paper, we treat the computational problem whether an order on a given subset of the power set of elements satisfying different variants of dominance and independence can be found, given a ranking on the elements. We show that this problem is tractable for partial rankings and NP-complete for total rankings.
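A checker for such orders can be sketched directly. The dominance condition below is one common variant from the set-ranking literature (adding an element better than everything in A should move the set up; adding one worse than everything should move it down) and may differ in detail from the variants treated in the paper; the helper is hypothetical, not the authors' code.

```python
# Sketch: detect violations of a dominance axiom in a ranking of a
# given family of sets (best first). Illustrative only.

def violates_dominance(element_rank, ranked_sets):
    """element_rank: element -> position (0 = best).
    ranked_sets: list of frozensets, best first, a subset of the power set.
    Returns a witness (A, x) of a dominance violation, or None."""
    pos = {s: i for i, s in enumerate(ranked_sets)}
    for A in ranked_sets:
        for x in element_rank:
            if x in A or A | {x} not in pos:
                continue  # only compare sets that are both in the family
            if all(element_rank[x] < element_rank[a] for a in A):
                if not pos[A | {x}] < pos[A]:   # A plus a best element should beat A
                    return (A, x)
            if A and all(element_rank[x] > element_rank[a] for a in A):
                if not pos[A] < pos[A | {x}]:   # A should beat A plus a worst element
                    return (A, x)
    return None

rank = {"a": 0, "b": 1}  # a is preferred to b
family = [frozenset("a"), frozenset("ab"), frozenset("b")]
print(violates_dominance(rank, family))  # None: {a} > {a,b} > {b} is consistent
```

The paper's computational question is whether any total (or partial) order on an arbitrary such family jointly satisfying dominance and independence exists at all, which is where the NP-completeness result enters.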
Maximizing information exchange between complex networks
International Nuclear Information System (INIS)
West, Bruce J.; Geneston, Elvis L.; Grigolini, Paolo
2008-01-01
Science is not merely the smooth progressive interaction of hypothesis, experiment and theory, although it sometimes has that form. More realistically the scientific study of any given complex phenomenon generates a number of explanations, from a variety of perspectives, that eventually requires synthesis to achieve a deep level of insight and understanding. One such synthesis has created the field of out-of-equilibrium statistical physics as applied to the understanding of complex dynamic networks. Over the past forty years the concept of complexity has undergone a metamorphosis. Complexity was originally seen as a consequence of memory in individual particle trajectories, in full agreement with a Hamiltonian picture of microscopic dynamics and, in principle, macroscopic dynamics could be derived from the microscopic Hamiltonian picture. The main difficulty in deriving macroscopic dynamics from microscopic dynamics is the need to take into account the actions of a very large number of components. The existence of events such as abrupt jumps, considered by the conventional continuous time random walk approach to describing complexity was never perceived as conflicting with the Hamiltonian view. Herein we review many of the reasons why this traditional Hamiltonian view of complexity is unsatisfactory. We show that as a result of technological advances, which make the observation of single elementary events possible, the definition of complexity has shifted from the conventional memory concept towards the action of non-Poisson renewal events. We show that the observation of crucial processes, such as the intermittent fluorescence of blinking quantum dots as well as the brain's response to music, as monitored by a set of electrodes attached to the scalp, has forced investigators to go beyond the traditional concept of complexity and to establish closer contact with the nascent field of complex networks. Complex networks form one of the most challenging areas of modern
DEFF Research Database (Denmark)
Warming-Rasmussen, Bent; Quick, Reiner; Liempd, Dennis van
2011-01-01
In the wake of the financial crisis, the EU Commission has published a Green Paper on the future role of the audit function in Europe. The Green Paper lists a number of proposals for tighter rules for audits and auditors in order to contribute to stabilizing the financial system. The present article presents research contributions to the question of whether the auditor should continue to provide both audit and non-audit services (NAS) to an audit client. Research results show that this double function for the same audit client is a problem for stakeholders' confidence in auditor independence…
Reference Values for Maximal Inspiratory Pressure: A Systematic Review
Directory of Open Access Journals (Sweden)
Isabela MB Sclauser Pessoa
2014-01-01
Full Text Available BACKGROUND: Maximal inspiratory pressure (MIP is the most commonly used measure to evaluate inspiratory muscle strength. Normative values for MIP vary significantly among studies, which may reflect differences in participant demographics and technique of MIP measurement.
Classification of conformal representations induced from the maximal cuspidal parabolic
Energy Technology Data Exchange (ETDEWEB)
Dobrev, V. K., E-mail: dobrev@inrne.bas.bg [Scuola Internazionale Superiore di Studi Avanzati (Italy)
2017-03-15
In the present paper we continue the project of systematic construction of invariant differential operators on the example of representations of the conformal algebra induced from the maximal cuspidal parabolic.
Maximizing Your Investment in Building Automation System Technology.
Darnell, Charles
2001-01-01
Discusses how organizational issues and system standardization can be important factors that determine an institution's ability to fully exploit contemporary building automation systems (BAS). Also presents a management strategy for maximizing BAS investments. (GR)
Eccentric exercise decreases maximal insulin action in humans
DEFF Research Database (Denmark)
Asp, Svend; Daugaard, J R; Kristiansen, S
1996-01-01
subjects participated in two euglycaemic clamps, performed in random order. One clamp was preceded 2 days earlier by one-legged eccentric exercise (post-eccentric exercise clamp (PEC)) and one was without the prior exercise (control clamp (CC)). 2. During PEC the maximal insulin-stimulated glucose uptake … for all three clamp steps used (P … maximal activity of glycogen synthase was identical in the two thighs for all clamp steps. 3. The glucose infusion rate (GIR …) necessary to maintain euglycaemia during maximal insulin stimulation was lower during PEC compared with CC (15.7%, 81.3 +/- 3.2 vs. 96.4 +/- 8.8 mumol kg-1 min-1, P … maximal…
Maximal slicing of D-dimensional spherically symmetric vacuum spacetime
International Nuclear Information System (INIS)
Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru
2009-01-01
We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D≥5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole slot and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D=5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.
ICTs and Urban Micro Enterprises : Maximizing Opportunities for ...
International Development Research Centre (IDRC) Digital Library (Canada)
ICTs and Urban Micro Enterprises : Maximizing Opportunities for Economic Development ... the use of ICTs in micro enterprises and their role in reducing poverty. ... in its approach to technological connectivity but bottom-up in relation to.
Comparing BV solutions of rate independent processes
Czech Academy of Sciences Publication Activity Database
Krejčí, Pavel; Recupero, V.
2014-01-01
Roč. 21, č. 1 (2014), s. 121-146 ISSN 0944-6532 R&D Projects: GA ČR GAP201/10/2315 Institutional support: RVO:67985840 Keywords : variational inequalities * rate independence * convex sets Subject RIV: BA - General Mathematics Impact factor: 0.552, year: 2014 http://www.heldermann.de/JCA/JCA21/JCA211/jca21006.htm
BV solutions of rate independent differential inclusions
Czech Academy of Sciences Publication Activity Database
Krejčí, Pavel; Recupero, V.
2014-01-01
Roč. 139, č. 4 (2014), s. 607-619 ISSN 0862-7959 R&D Projects: GA ČR GAP201/10/2315 Institutional support: RVO:67985840 Keywords : differential inclusion * stop operator * rate independence * convex set Subject RIV: BA - General Mathematics http://hdl.handle.net/10338.dmlcz/144138
Nonadditive entropy maximization is inconsistent with Bayesian updating
Pressé, Steve
2014-11-01
The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.
Sex differences in autonomic function following maximal exercise.
Kappus, Rebecca M; Ranadive, Sushant M; Yan, Huimin; Lane-Cordova, Abbi D; Cook, Marc D; Sun, Peng; Harvey, I Shevon; Wilund, Kenneth R; Woods, Jeffrey A; Fernhall, Bo
2015-01-01
Heart rate variability (HRV), blood pressure variability (BPV), and heart rate recovery (HRR) are measures that provide insight regarding autonomic function. Maximal exercise can affect autonomic function, and it is unknown if there are sex differences in autonomic recovery following exercise. Therefore, the purpose of this study was to determine sex differences in several measures of autonomic function and the response following maximal exercise. Seventy-one (31 males and 40 females) healthy, nonsmoking, sedentary normotensive subjects between the ages of 18 and 35 underwent measurements of HRV and BPV at rest and following a maximal exercise bout. HRR was measured at minutes one and two following maximal exercise. Males had significantly greater HRR following maximal exercise at both minutes one and two; however, the significance between sexes was eliminated when controlling for VO2 peak. Males had significantly higher resting BPV-low-frequency (LF) values compared to females and did not significantly change following exercise, whereas females had significantly increased BPV-LF values following acute maximal exercise. Although males and females exhibited a significant decrease in both HRV-LF and HRV-high frequency (HF) with exercise, females had significantly higher HRV-HF values following exercise. Males had a significantly higher HRV-LF/HF ratio at rest; however, both males and females significantly increased their HRV-LF/HF ratio following exercise. Pre-menopausal females exhibit a cardioprotective autonomic profile compared to age-matched males due to lower resting sympathetic activity and faster vagal reactivation following maximal exercise. Acute maximal exercise is a sufficient autonomic stressor to demonstrate sex differences in the critical post-exercise recovery period.
Maximally flat radiation patterns of a circular aperture
Minkovich, B. M.; Mints, M. Ia.
1989-08-01
The paper presents an explicit solution to the problems of maximizing the area utilization coefficient and of obtaining the best approximation (on the average) of a sectorial Pi-shaped radiation pattern of an antenna with a circular aperture when Butterworth conditions are imposed on the approximating pattern with the aim of flattening it. Constraints on the choice of admissible minimum and maximum antenna dimensions are determined which make possible the synthesis of maximally flat patterns with small sidelobes.
Design of optimal linear antennas with maximally flat radiation patterns
Minkovich, B. M.; Mints, M. Ia.
1990-02-01
The paper presents an explicit solution to the problem of maximizing the aperture area utilization coefficient and obtaining the best approximation in the mean of the sectorial U-shaped radiation pattern of a linear antenna when Butterworth flattening constraints are imposed on the approximating pattern. Constraints are established on the choice of the smallest and largest antenna dimensions that make it possible to obtain maximally flat patterns having a low sidelobe level and free of ripple within the main lobe.
No Mikheyev-Smirnov-Wolfenstein Effect in Maximal Mixing
Harrison, P. F.; Perkins, D. H.; Scott, W. G.
1996-01-01
We investigate the possible influence of the MSW effect on the expectations for the solar neutrino experiments in the maximal mixing scenario suggested by the atmospheric neutrino data. A direct numerical calculation of matter induced effects in the Sun shows that the naive vacuum predictions are left completely undisturbed in the particular case of maximal mixing, so that the MSW effect turns out to be unobservable. We give a qualitative explanation of this result.
A fractional optimal control problem for maximizing advertising efficiency
Igor Bykadorov; Andrea Ellero; Stefania Funari; Elena Moretti
2007-01-01
We propose an optimal control problem to model the dynamics of the communication activity of a firm with the aim of maximizing its efficiency. We assume that the advertising effort undertaken by the firm contributes to increase the firm's goodwill and that the goodwill affects the firm's sales. The aim is to find the advertising policies in order to maximize the firm's efficiency index which is computed as the ratio between "outputs" and "inputs" properly weighted; the outputs are represented...
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
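The augmentation idea can be sketched sequentially. The chordality test below uses simplicial-vertex elimination (a graph is chordal iff one can repeatedly delete a vertex whose neighbourhood is a clique), and the greedy loop is only an illustration of augmentation starting from the empty graph, not the paper's parallel algorithm or its spanning-subgraph initialization.

```python
from itertools import combinations

def is_chordal(adj):
    """Chordality via simplicial elimination: repeatedly delete a vertex
    whose neighbours form a clique; this succeeds for every vertex iff
    the graph is chordal."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    while adj:
        for v, ns in adj.items():
            if all(b in adj[a] for a, b in combinations(ns, 2)):
                for u in ns:
                    adj[u].discard(v)
                del adj[v]
                break  # restart the scan on the smaller graph
        else:
            return False  # no simplicial vertex left
    return True

def maximal_chordal_subgraph(vertices, edges):
    """Greedy augmentation sketch: keep each edge only if the subgraph
    stays chordal (brute force; illustrative only)."""
    adj = {v: set() for v in vertices}
    kept = []
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
        if is_chordal(adj):
            kept.append((u, v))
        else:
            adj[u].discard(v); adj[v].discard(u)
    return kept

# A 4-cycle is not chordal; exactly one of its edges must be left out.
print(maximal_chordal_subgraph(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]))
```

Note that the result depends on the order in which edges are tried, which mirrors the paper's observation that the choice of initial chordal subgraph affects both running time and the number of edges retained.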
On Maximally Dissipative Shock Waves in Nonlinear Elasticity
Knowles, James K.
2010-01-01
Shock waves in nonlinearly elastic solids are, in general, dissipative. We study the following question: among all plane shock waves that can propagate with a given speed in a given one-dimensional nonlinearly elastic bar, which one—if any—maximizes the rate of dissipation? We find that the answer to this question depends strongly on the qualitative nature of the stress-strain relation characteristic of the given material. When maximally dissipative shocks do occur, they propagate according t...
Maximal near-field radiative heat transfer between two plates
Nefzaoui, Elyes; Ezzahri, Younès; Drevillon, Jérémie; Joulain, Karl
2013-01-01
Near-field radiative transfer is a promising way to significantly and simultaneously enhance both thermo-photovoltaic (TPV) devices power densities and efficiencies. A parametric study of Drude and Lorentz models performances in maximizing near-field radiative heat transfer between two semi-infinite planes separated by nanometric distances at room temperature is presented in this paper. Optimal parameters of these models that provide optical properties maximizing the r...
Softly Broken Lepton Numbers: an Approach to Maximal Neutrino Mixing
International Nuclear Information System (INIS)
Grimus, W.; Lavoura, L.
2001-01-01
We discuss models where the U(1) symmetries of lepton numbers are responsible for maximal neutrino mixing. We pay particular attention to an extension of the Standard Model (SM) with three right-handed neutrino singlets in which we require that the three lepton numbers L_e, L_μ, and L_τ be separately conserved in the Yukawa couplings, but assume that they are softly broken by the Majorana mass matrix M_R of the neutrino singlets. In this framework, where lepton-number breaking occurs at a scale much higher than the electroweak scale, deviations from family lepton number conservation are calculable, i.e., finite, and lepton mixing stems exclusively from M_R. We show that in this framework either maximal atmospheric neutrino mixing or maximal solar neutrino mixing or both can be imposed by invoking symmetries. In this way those maximal mixings are stable against radiative corrections. The model which achieves maximal (or nearly maximal) solar neutrino mixing assumes that there are two different scales in M_R and that the lepton number L̄ = L_e − L_μ − L_τ is conserved in between them. We work out the difference between this model and the conventional scenario where (approximate) L̄ invariance is imposed directly on the mass matrix of the light neutrinos. (author)
DEFF Research Database (Denmark)
Steding-Ehrenborg, Katarina; Boushel, Robert C; Calbet, José A
2015-01-01
BACKGROUND: Age-related decline in cardiac function can be prevented or postponed by lifelong endurance training. However, the effects of normal ageing as well as of lifelong endurance exercise on the longitudinal and radial contribution to stroke volume are unknown. The aim of this study was to determine … subjects (29 ± 4 years) underwent cardiac MR. All subjects underwent maximal exercise testing, and for elderly subjects maximal cardiac output during cycling was determined using the dye dilution technique. RESULTS: Longitudinal and radial contribution to stroke volume did not differ between groups … groups for RVAVPD (P = 0.2). LVAVPD was an independent predictor of maximal cardiac output (R² = 0.61, P … groups. However, how longitudinal pumping…
Survival associated pathway identification with group Lp penalized global AUC maximization
Directory of Open Access Journals (Sweden)
Liu Zhenqiu
2010-08-01
Full Text Available Abstract It has been demonstrated that genes in a cell do not act independently. They interact with one another to complete certain biological processes or to implement certain molecular functions. How to incorporate biological pathways or functional groups into the model and identify survival-associated gene pathways is still a challenging problem. In this paper, we propose a novel iterative gradient based method for survival analysis with group Lp penalized global AUC summary maximization. Unlike LASSO, Lp (p < 1) … We first extend Lp for individual gene identification to a group Lp penalty for pathway selection, and then develop a novel iterative gradient algorithm for penalized global AUC summary maximization (IGGAUCS). This method incorporates the genetic pathways into global AUC summary maximization and identifies survival-associated pathways instead of individual genes. The tuning parameters are determined using 10-fold cross validation with training data only. The prediction performance is evaluated using test data. We apply the proposed method to survival outcome analysis with gene expression profiles and identify multiple pathways simultaneously. Experimental results with simulation and gene expression data demonstrate that the proposed procedures can be used for identifying important biological pathways that are related to survival phenotype and for building a parsimonious model for predicting the survival times.
The importance of board independence
Zijl, N.J.M.
2012-01-01
Although the attributed importance of board independence is high, a clear definition of independence does not exist. Furthermore, the aim and consequences of independence are the subject of discussion and empirical evidence about the impact of independence is weak and disputable. Despite this lack
Autonomy, Independence, Inclusion
Directory of Open Access Journals (Sweden)
Filippo Angelucci
2015-04-01
Full Text Available The living environment must not only meet the primary needs of living, but also people's expectations of improved life, social relations and work. The need for a living environment that responds to the needs of users with their different abilities, beyond standardized solutions, is increasingly felt, as autonomy, independence and well-being are the result of the real usability and adaptability of spaces. Projects that improve the inclusivity of living space and promote the rehabilitation of fragile users need to be characterized as interdisciplinary processes in which the integration of specialized contributions leads to the adaptive customization of spatial and technological solutions that evolve with the changing needs, functional capacities and abilities of individuals.
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)
Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan
2016-01-01
Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
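Submaximal prediction typically exploits the near-linear heart-rate/oxygen-uptake relationship: fit a line through submaximal (HR, VO2) points and extrapolate it to an estimated maximal heart rate. The sketch below shows that generic approach only; it is not NASA's specific protocol, and all values are hypothetical.

```python
def predict_vo2pk(hr, vo2, hr_max):
    """Least-squares line through submaximal (HR, VO2) points, extrapolated to HRmax."""
    n = len(hr)
    mean_hr = sum(hr) / n
    mean_vo2 = sum(vo2) / n
    slope = sum((h - mean_hr) * (v - mean_vo2) for h, v in zip(hr, vo2)) / \
            sum((h - mean_hr) ** 2 for h in hr)
    # extrapolate the fitted line to the (estimated) maximal heart rate
    return mean_vo2 + slope * (hr_max - mean_hr)
```

With perfectly linear submaximal data (HR 100/120/140 bpm at VO2 1.0/1.5/2.0 L·min⁻¹) and an assumed HRmax of 180, the line extrapolates to 3.0 L·min⁻¹; the individual-level error the abstract describes arises because real HRmax and linearity assumptions fail per subject.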
A Simulated Annealing method to solve a generalized maximal covering location problem
Directory of Open Access Journals (Sweden)
M. Saeed Jabalameli
2011-04-01
Full Text Available The maximal covering location problem (MCLP) seeks to locate a predefined number of facilities in order to maximize the number of covered demand points. In its classical form, the MCLP rests on three implicit assumptions: all-or-nothing coverage, individual coverage, and a fixed coverage radius. By relaxing these assumptions, three classes of model formulations have been developed: gradual cover models, cooperative cover models, and variable radius models. In this paper, we develop a special form of the MCLP which combines the characteristics of gradual cover, cooperative cover, and variable radius models. The proposed problem has many applications, such as locating cell phone towers. The model is formulated as a mixed integer non-linear program (MINLP). In addition, a simulated annealing algorithm is used to solve the resulting problem, and the performance of the proposed method is evaluated on a set of randomly generated problems.
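A plain simulated annealing loop for the classical MCLP (all-or-nothing coverage, fixed radius) shows the metaheuristic skeleton; the paper's gradual/cooperative/variable-radius variant would change only the coverage function. Names, cooling schedule, and parameters below are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def coverage(sol, cand, demand, radius):
    """Weighted demand covered: a point counts if any open facility is within radius."""
    cov = 0
    for (dx, dy, wgt) in demand:
        if any((dx - cand[i][0]) ** 2 + (dy - cand[i][1]) ** 2 <= radius ** 2
               for i in sol):
            cov += wgt
    return cov

def anneal_mclp(cand, demand, k, radius, T0=1.0, alpha=0.95, iters=2000, seed=1):
    """Simulated annealing over k-subsets of candidate sites (swap moves)."""
    rnd = random.Random(seed)
    sol = set(rnd.sample(range(len(cand)), k))
    best, best_cov = set(sol), coverage(sol, cand, demand, radius)
    cur_cov, T = best_cov, T0
    for _ in range(iters):
        out = rnd.choice(sorted(sol))                     # close one facility
        inn = rnd.choice([i for i in range(len(cand)) if i not in sol])
        new = (sol - {out}) | {inn}                       # open another
        c = coverage(new, cand, demand, radius)
        if c >= cur_cov or rnd.random() < math.exp((c - cur_cov) / T):
            sol, cur_cov = new, c                         # Metropolis acceptance
            if c > best_cov:
                best, best_cov = set(new), c
        T *= alpha                                        # geometric cooling
    return best, best_cov
```

On a tiny instance with two demand clusters and three candidate sites, the annealer recovers the two sites that cover all demand.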
Throughput maximization for buffer-aided hybrid half-/full-duplex relaying with self-interference
Khafagy, Mohammad Galal
2015-06-01
In this work, we consider a two-hop cooperative setting where a source communicates with a destination through an intermediate relay node with a buffer. Unlike the existing body of work on buffer-aided half-duplex relaying, we consider a hybrid half-/full-duplex relaying scenario with loopback interference in the full-duplex mode. Depending on the channel outage and buffer states that are assumed available at the transmitters, the source and relay may either transmit simultaneously or revert to orthogonal transmission. Specifically, a joint source/relay scheduling and relaying mode selection mechanism is proposed to maximize the end-to-end throughput. The throughput maximization problem is converted to a linear program where the exact global optimal solution is efficiently obtained via standard convex/linear numerical optimization tools. Finally, the theoretical findings are corroborated with event-based simulations to provide the necessary performance validation.
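The mode-selection idea can be phrased as a small linear program over time shares: a fraction t1 in half-duplex source-to-relay mode, t2 in half-duplex relay-to-destination mode, and t3 in full-duplex mode, maximizing the rate delivered to the destination subject to buffer flow balance (the relay cannot forward more than it receives). The toy grid search below solves that formulation directly; the rate numbers and the formulation itself are illustrative assumptions, not the paper's exact model or its LP solver.

```python
def best_schedule(r_sr, r_rd, r_sr_fd, r_rd_fd, steps=400):
    """Grid search over time shares (t1, t2, t3), t1+t2+t3 = 1.

    r_sr, r_rd: half-duplex link rates; r_sr_fd, r_rd_fd: full-duplex rates,
    degraded by self-interference. Returns (throughput, (t1, t2, t3)).
    """
    best = (0.0, None)
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            t1, t3 = i / steps, j / steps
            t2 = 1.0 - t1 - t3
            arrivals = t1 * r_sr + t3 * r_sr_fd      # bits entering the relay buffer
            departures = t2 * r_rd + t3 * r_rd_fd    # bits delivered to destination
            if departures <= arrivals + 1e-12:       # buffer flow-balance constraint
                if departures > best[0]:
                    best = (departures, (t1, t2, t3))
    return best
```

With r_sr = 2, r_rd = 1 and self-interference-degraded full-duplex rates (1.5, 0.6), the optimum mixes full-duplex operation with extra relay-to-destination time and beats both pure half-duplex (2/3) and pure full-duplex (0.6), reaching 1.5/1.9 ≈ 0.789.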
Net returns, fiscal risks, and the optimal patient mix for a profit-maximizing hospital.
Ozatalay, S; Broyles, R
1987-10-01
As is well recognized, the provisions of PL98-21 not only transfer financial risks from the Medicare program to the hospital but also induce institutions to adjust the diagnostic mix of Medicare beneficiaries so as to maximize net income or minimize the net loss. This paper employs variation in the set of net returns as the sole measure of financial risk and develops a model that identifies the mix of beneficiaries that maximizes net income, subject to a given level of risk. The results indicate that the provisions of PL98-21 induce the institution to deny admission to elderly patients presenting conditions for which the net return is relatively low and the variance in the cost per case is large. Further, the paper suggests that the treatment of beneficiaries at a level commensurate with previous periods or the preferences of physicians may jeopardize the viability and solvency of Medicare-dependent hospitals.
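The core idea, maximize expected net return subject to a ceiling on risk measured through the variation of net returns, can be illustrated with a two-category toy model. The numbers, the grid search, and the simplifying variance model (independent cases, variance of the mix as a weighted sum) are invented for illustration and are not the paper's model.

```python
def best_mix(returns, variances, risk_cap, steps=100):
    """Two-DRG toy: admit fraction x of type A and 1-x of type B; maximize
    expected net return subject to a cap on the variance of the mix."""
    best = None
    for i in range(steps + 1):
        x = i / steps
        mean = x * returns[0] + (1 - x) * returns[1]
        var = x ** 2 * variances[0] + (1 - x) ** 2 * variances[1]  # independent cases
        if var <= risk_cap + 1e-9 and (best is None or mean > best[0]):
            best = (mean, x)
    return best
```

With a high-return/high-variance category A (return 5, variance 9) and a low-return/low-variance category B (return 2, variance 1) under a variance cap of 1.0, the model admits only 20% of category A, mirroring the abstract's conclusion that high-variance, low-return cases are rationed.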
Goldengorin, B.; Ghosh, D.
Maximization of submodular functions on a ground set is a NP-hard combinatorial optimization problem. Data correcting algorithms are among the several algorithms suggested for solving this problem exactly and approximately. From the point of view of Hasse diagrams data correcting algorithms use
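Data-correcting algorithms target exact and approximate maximization; for contrast, the textbook greedy rule for monotone submodular maximization under a cardinality constraint (with its classical (1 - 1/e) guarantee) fits in a few lines. Shown here on maximum coverage, a canonical submodular objective; this is a baseline sketch, not the data-correcting method.

```python
def greedy_max_cover(universe_sets, k):
    """Pick k sets greedily by marginal coverage gain."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(universe_sets)),
                   key=lambda i: len(universe_sets[i] - covered))  # marginal gain
        chosen.append(best)
        covered |= universe_sets[best]
    return chosen, covered
```

For sets {1,2,3}, {3,4}, {5} with k = 2, greedy first takes the largest set, then the one with the largest marginal gain.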
Smith, Des H.V.; Converse, Sarah J.; Gibson, Keith; Moehrenschlager, Axel; Link, William A.; Olsen, Glenn H.; Maguire, Kelly
2011-01-01
Captive breeding is key to management of severely endangered species, but maximizing captive production can be challenging because of poor knowledge of species breeding biology and the complexity of evaluating different management options. In the face of uncertainty and complexity, decision-analytic approaches can be used to identify optimal management options for maximizing captive production. Building decision-analytic models requires iterations of model conception, data analysis, model building and evaluation, identification of remaining uncertainty, further research and monitoring to reduce uncertainty, and integration of new data into the model. We initiated such a process to maximize captive production of the whooping crane (Grus americana), the world's most endangered crane, which is managed through captive breeding and reintroduction. We collected 15 years of captive breeding data from 3 institutions and used Bayesian analysis and model selection to identify predictors of whooping crane hatching success. The strongest predictor, and that with clear management relevance, was incubation environment. The incubation period of whooping crane eggs is split across two environments: crane nests and artificial incubators. Although artificial incubators are useful for allowing breeding pairs to produce multiple clutches, our results indicate that crane incubation is most effective at promoting hatching success. Hatching probability increased the longer an egg spent in a crane nest, from 40% hatching probability for eggs receiving 1 day of crane incubation to 95% for those receiving 30 days (time incubated in each environment varied independently of total incubation period). Because birds will lay fewer eggs when they are incubating longer, a tradeoff exists between the number of clutches produced and egg hatching probability. We developed a decision-analytic model that estimated 16 to be the optimal number of days of crane incubation needed to maximize the number of
Czech Academy of Sciences Publication Activity Database
Vodička, R.; Mantič, V.; Roubíček, Tomáš
2014-01-01
Roč. 49, č. 12 (2014), s. 2933-2963 ISSN 0025-6455 R&D Projects: GA ČR GAP201/10/0357 Institutional support: RVO:61388998 Keywords : adhesive contact * debonding interface fracture * interface damage Subject RIV: BA - General Mathematics Impact factor: 1.949, year: 2014 http://link.springer.com/article/10.1007/s11012-014-0045-4?no-access=true
Zhang, J.; Timmermans, H.J.P.; Borgers, A.W.J.
2002-01-01
Existing activity-based models of transport demand typically assume an individual decision-making process. The focus on theories of individual decision making may be partially due to the lack of behaviorally oriented modeling methodologies for group decision making. Therefore, an attempt has been
International Nuclear Information System (INIS)
Hauptfuhrer, R.R.
1990-01-01
The recent history of Oryx provides invaluable lessons for those who plan future energy strategies, relates the author of this paper. When Oryx became an independent oil and gas company, its reserves were declining, its stock was selling below asset values, and the price of oil seemed stuck below $15 per barrel. The message from Oryx management to Oryx employees was: We are in charge of our own destiny. We are about to create our own future. Oryx had developed a new, positive corporate culture and the corporate credit required for growth. This paper points to two basic principles that have guided the metamorphosis in Oryx's performance. The first objective was to improve operational efficiency and to identify the right performance indicators to measure this improvement. It states that the most critical performance indicator for an exploration and production company must be replacement and expansion of reserves at a competitive replacement cost. Oryx has cut its finding costs from $12 to $5 per barrel, while the BP acquisition provided proven reserves at a cost of only $4 per barrel. Another performance indicator measures Oryx's standing in the financial markets
Independents' group posts loss
International Nuclear Information System (INIS)
Sanders, V.; Price, R.B.
1992-01-01
Low oil and gas prices and special charges caused the group of 50 U.S. independent producers Oil and Gas Journal tracks to post a combined loss in first half 1992. The group logged a net loss of $53 million in the first half, compared with net earnings of $354 million in first half 1991, when higher oil prices during the Persian Gulf crisis buoyed earnings in spite of crude oil and natural gas production declines. The combined loss in the first half follows a 45% drop in the group's earnings in 1991 and compares with the OGJ group of integrated oil companies, whose first half 1992 income fell 47% from the prior year. Special charges, generally related to asset writedowns, accounted for most of the almost $560 million in losses posted by about a third of the group. Nerco Oil and Gas Inc., Vancouver, Wash., alone accounted for almost half that total, with charges related to an asset writedown of $238 million in the first quarter. Despite the poor first half performance, the outlook is bright for sharply improved group earnings in the second half, assuming reasonably healthy oil and gas prices and increased production resulting from acquisitions and in response to those prices.
Directory of Open Access Journals (Sweden)
Christopher M. Hogan
2011-01-01
Full Text Available Acute and chronic lung inflammation is associated with numerous important disease pathologies including asthma, chronic obstructive pulmonary disease and silicosis. Lung fibroblasts are a novel and important target of anti-inflammatory therapy, as they orchestrate, respond to, and amplify inflammatory cascades and are the key cell in the pathogenesis of lung fibrosis. Peroxisome proliferator-activated receptor gamma (PPARγ) ligands are small molecules that induce anti-inflammatory responses in a variety of tissues. Here, we report for the first time that PPARγ ligands have potent anti-inflammatory effects on human lung fibroblasts. 2-cyano-3,12-dioxoolean-1,9-dien-28-oic acid (CDDO) and 15-deoxy-Δ12,14-prostaglandin J2 (15d-PGJ2) inhibit production of the inflammatory mediators interleukin-6 (IL-6), monocyte chemoattractant protein-1 (MCP-1), COX-2, and prostaglandin E2 (PGE2) in primary human lung fibroblasts stimulated with either IL-1β or silica. The anti-inflammatory properties of these molecules are not blocked by the PPARγ antagonist GW9662 and thus are largely PPARγ independent. However, they are dependent on the presence of an electrophilic carbon. CDDO and 15d-PGJ2, but not rosiglitazone, inhibited NF-κB activity. These results demonstrate that CDDO and 15d-PGJ2 are potent attenuators of proinflammatory responses in lung fibroblasts and suggest that these molecules should be explored as the basis for novel, targeted anti-inflammatory therapies in the lung and other organs.
Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for
Directory of Open Access Journals (Sweden)
Sung-Yul Kim
2018-02-01
Full Text Available This study suggests energy-independent architectural models for residential complexes through the production of solar-energy-based renewable energy. Daegu Metropolitan City, South Korea, was selected as the target area for the residential complex. An optimal location in the area was selected to maximize the production of solar-energy-based renewable energy. Then, several architectural design models were developed. Next, after analyzing the energy-use patterns of each design model, economic analyses were conducted considering the profits generated from renewable-energy use. In this way, the optimum residential building model was identified. For this site, optimal solar power generation efficiency was obtained when solar panels were installed at 25° angles. Thus, the sloped roof angles were set to 25°, and the average height of the internal space of the highest floor was set to 1.8 m. Based on this model, analyses were performed regarding energy self-sufficiency improvement and economics. It was verified that connecting solar power generation capacity from a zero-energy perspective considering the consumer’s amount of power consumption was more effective than connecting maximum solar power generation capacity according to building structure. Moreover, it was verified that selecting a subsidizable solar power generation capacity according to the residential solar power facility connection can maximize operational benefits.
Quantum speedup in solving the maximal-clique problem
Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang
2018-03-01
The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to O(√(2^n)) and O(n^2), respectively. With respect to oracle-related quantum algorithms for the NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.
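For scale, the classical exhaustive search that the quantum algorithm quadratically outperforms can be written directly: checking all vertex subsets costs O(2^n) in the worst case, versus the quoted O(√(2^n)). A small classical sketch (helper names are hypothetical):

```python
from itertools import combinations

def max_clique(n, edges):
    """Brute-force maximum clique: try subsets from largest to smallest, O(2^n)."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for size in range(n, 0, -1):
        for cand in combinations(range(n), size):
            # a clique needs every pair of its vertices to be adjacent
            if all(v in adj[u] for u, v in combinations(cand, 2)):
                return set(cand)
    return set()
```

On the paper's demonstration instance (two vertices, one edge) the clique is the whole graph; on a triangle with a pendant vertex, the triangle is returned.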
Maximization of regional probabilities using Optimal Surface Graphs
DEFF Research Database (Denmark)
Arias Lorza, Andres M.; Van Engelen, Arna; Petersen, Jens
2018-01-01
Purpose: We present a segmentation method that maximizes regional probabilities enclosed by coupled surfaces using an Optimal Surface Graph (OSG) cut approach. This OSG cut determines the globally optimal solution given a graph constructed around an initial surface. While most methods for vessel wall segmentation only use edge information, we show that maximizing regional probabilities using an OSG improves the segmentation results. We applied this to automatically segment the vessel wall of the carotid artery in magnetic resonance images. Methods: First, voxel-wise regional probability maps were obtained using a Support Vector Machine classifier trained on local image features. Then, the OSG segments the regions which maximize the regional probabilities considering smoothness and topological constraints. Results: The method was evaluated on 49 carotid arteries from 30 subjects......
El culto de Maximón en Guatemala
Pédron‑Colombani, Sylvie
2009-01-01
This article focuses on the figure of Maximón, a syncretic deity of Guatemala, in a context in which popular Catholicism is being displaced by Protestant churches. This hybrid divinity, onto which Catholic saints such as Judas Iscariot or the Mayan god Mam are grafted, allows Maximón to be appropriated by distinct segments of the population (indigenous as well as mestizo). It likewise serves as a symbol of veiled social protest when Maximón is associated with figur...
Maximal Electric Dipole Moments of Nuclei with Enhanced Schiff Moments
Ellis, John; Pilaftsis, Apostolos
2011-01-01
The electric dipole moments (EDMs) of heavy nuclei, such as 199Hg, 225Ra and 211Rn, can be enhanced by the Schiff moments induced by the presence of nearby parity-doublet states. Working within the framework of the maximally CP-violating and minimally flavour-violating (MCPMFV) version of the MSSM, we discuss the maximal values that such EDMs might attain, given the existing experimental constraints on the Thallium, neutron and Mercury EDMs. The maximal EDM values of the heavy nuclei are obtained with the help of a differential-geometrical approach proposed recently that enables the maxima of new CP-violating observables to be calculated exactly in the linear approximation. In the case of 225Ra, we find that its EDM may be as large as 6 to 50 x 10^{-27} e.cm.
Maximal and anaerobic threshold cardiorespiratory responses during deepwater running
Directory of Open Access Journals (Sweden)
Ana Carolina Kanitz
2014-12-01
Full Text Available DOI: http://dx.doi.org/10.5007/1980-0037.2015v17n1p41 Aquatic exercises provide numerous benefits to the health of their practitioners. To secure these benefits, it is essential to have prescriptions proper to the needs of each individual and, therefore, it is important to study the cardiorespiratory responses of different activities in this environment. Thus, the aim of this study was to compare the cardiorespiratory responses at the anaerobic threshold (AT) between maximal deep-water running (DWR) and maximal treadmill running (TMR). In addition, two methods of determining the AT (the heart rate deflection point [HRDP] and the ventilatory method [VM]) are compared in the two evaluated protocols. Twelve young women performed the two maximal protocols. Two-factor ANOVA for repeated measures with a post-hoc Bonferroni test was used (α < 0.05). Significantly higher values of maximal heart rate and maximal oxygen uptake (TMR: 33.7 ± 3.9; DWR: 22.5 ± 4.1 ml·kg−1·min−1) were found in TMR compared to DWR. Furthermore, no significant differences were found between the methods for determining the AT (TMR: VM: 28.1 ± 5.3, HRDP: 26.6 ± 5.5 ml·kg−1·min−1; DWR: VM: 18.7 ± 4.8, HRDP: 17.8 ± 4.8 ml·kg−1·min−1). The results indicate that a specific maximal test for the trained modality should be conducted, and the HRDP can be used as a simple and practical method of determining the AT, based on which the training intensity can be determined.
Commonwealth of (Independent States
Directory of Open Access Journals (Sweden)
Vrućinić Dušan
2013-01-01
Full Text Available Following the stages from its establishment to the present-day functioning of such a specific regional organization as the Commonwealth of Independent States (CIS), the article seeks to explain the meaning of its existence, efficiency and functioning. The CIS was created to make the dissolution of a major world super-power, which throughout the 20th century together with the USA defined the bipolar world, as painless as possible, especially for the new countries and their nationally and ethnically diverse populations. During the early years after the dissolution of the USSR, the CIS played a major role in a more flexible and less severe dissolution of the Soviet empire, alleviating the consequences for its people. More efficient cooperation among the republics in all fields was also one of the tasks of the Commonwealth, to which it was devoted to the extent permitted by the unfavourable circumstances of the time. Difficult years of economic crisis did not allow the CIS to integrate its members economically as far as possible. Thanks to the economic recovery of the post-Soviet states in the early 21st century, the Commonwealth has been transformed, reformed and renewed in order to achieve better and more fruitful cooperation between the members. The CIS may serve as a proper example of how the former Soviet Union states are inextricably linked by social, security-political, economic, cultural, and communication-transport ties, thanks to the centuries-long coexistence of the peoples of these states in this area, despite internal and external factors which occasionally, but only temporarily, halt post-Soviet integration. Simply put, the CIS members are naturally predisposed to depend on each other, and they have the capacity for successful cooperation in the times and epochs brought on by the modern world.
Experimental Implementation of a Kochen-Specker Set of Quantum Tests
Directory of Open Access Journals (Sweden)
Vincenzo D’Ambrosio
2013-02-01
Full Text Available The conflict between classical and quantum physics can be identified through a series of yes-no tests on quantum systems, without it being necessary that these systems be in special quantum states. Kochen-Specker (KS sets of yes-no tests have this property and provide a quantum-versus-classical advantage that is free of the initialization problem that affects some quantum computers. Here, we report the first experimental implementation of a complete KS set that consists of 18 yes-no tests on four-dimensional quantum systems and show how to use the KS set to obtain a state-independent quantum advantage. We first demonstrate the unique power of this KS set for solving a task while avoiding the problem of state initialization. Such a demonstration is done by showing that, for 28 different quantum states encoded in the orbital-angular-momentum and polarization degrees of freedom of single photons, the KS set provides an impossible-to-beat solution. In a second experiment, we generate maximally contextual quantum correlations by performing compatible sequential measurements of the polarization and path of single photons. In this case, state independence is demonstrated for 15 different initial states. Maximum contextuality and state independence follow from the fact that the sequences of measurements project any initial quantum state onto one of the KS set’s eigenstates. Our results show that KS sets can be used for quantum-information processing and quantum computation and pave the way for future developments.
Efficient maximal Poisson-disk sampling and remeshing on surfaces
Guo, Jianwei; Yan, Dongming; Jia, Xiaohong; Zhang, Xiaopeng
2015-01-01
Poisson-disk sampling is one of the fundamental research problems in computer graphics that has many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves the state-of-the-art approach in efficiency, quality and the memory consumption.
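In the plane, plain dart throwing conveys the core Poisson-disk constraint (no two samples closer than r); the surface method above additionally uses mesh subdivision for conflict checking and void detection to guarantee maximality. The 2D toy below is a sketch of the constraint only, not the authors' algorithm, and rejection sampling alone does not guarantee a maximal sample.

```python
import random

def poisson_disk_2d(width, height, r, attempts=2000, seed=0):
    """Dart throwing: accept a candidate only if no accepted point lies within r."""
    rnd = random.Random(seed)
    pts = []
    for _ in range(attempts):
        p = (rnd.uniform(0, width), rnd.uniform(0, height))
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r for q in pts):
            pts.append(p)
    return pts
```

Every accepted pair of points is at least r apart; the subdivided-mesh bookkeeping in the paper is what makes the surface variant both maximal and efficient.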
Identities on maximal subgroups of GLn(D)
International Nuclear Information System (INIS)
Kiani, D.; Mahdavi-Hezavehi, M.
2002-04-01
Let D be a division ring with centre F. Assume that M is a maximal subgroup of GLn(D), n ≥ 1, such that Z(M) is algebraic over F. Group identities on M and polynomial identities on the F-linear hull F[M] are investigated. It is shown that if F[M] is a PI-algebra, then [D:F] n (D) and M is a maximal subgroup of N. If M satisfies a group identity, it is shown that M is abelian-by-finite. (author)
Instantons and Gribov copies in the maximally Abelian gauge
International Nuclear Information System (INIS)
Bruckmann, F.; Heinzl, T.; Wipf, A.; Tok, T.
2000-01-01
We calculate the Faddeev-Popov operator corresponding to the maximally Abelian gauge for gauge group SU(N). Specializing to SU(2) we look for explicit zero modes of this operator. Within an illuminating toy model (Yang-Mills mechanics) the problem can be completely solved and understood. In the field theory case we are able to find an analytic expression for a normalizable zero mode in the background of a single 't Hooft instanton. Accordingly, such an instanton corresponds to a horizon configuration in the maximally Abelian gauge. Possible physical implications are discussed
Determinants of maximal oxygen uptake in severe acute hypoxia
DEFF Research Database (Denmark)
Calbet, J A L; Boushel, Robert Christopher; Rådegran, G
2003-01-01
To unravel the mechanisms by which maximal oxygen uptake (VO2 max) is reduced with severe acute hypoxia in humans, nine Danish lowlanders performed incremental cycle ergometer exercise to exhaustion, while breathing room air (normoxia) or 10.5% O2 in N2 (hypoxia, approximately 5,300 m above sea level)......: 1) reduction of PiO2, 2) impairment of pulmonary gas exchange, and 3) reduction of maximal cardiac output and peak leg blood flow, each explaining about one-third of the loss in VO2 max.
Anatomy of maximal stop mixing in the MSSM
International Nuclear Information System (INIS)
Bruemmer, Felix; Kraml, Sabine; Kulkarni, Suchita
2012-05-01
A Standard Model-like Higgs near 125 GeV in the MSSM requires multi-TeV stop masses, or a near-maximal contribution to its mass from stop mixing. We investigate the maximal mixing scenario, and in particular its prospects for being realized in potentially realistic GUT models. We work out constraints on the possible GUT-scale soft terms, which we compare with what can be obtained from some well-known mechanisms of SUSY breaking mediation. Finally, we analyze two promising scenarios in detail, namely gaugino mediation and gravity mediation with non-universal Higgs masses.
Maximization of Tsallis entropy in the combinatorial formulation
International Nuclear Information System (INIS)
Suyari, Hiroki
2010-01-01
This paper presents the mathematical reformulation for maximization of the Tsallis entropy S_q in the combinatorial sense. More concretely, we generalize the original derivation of the Maxwell-Boltzmann distribution law to Tsallis statistics by means of the corresponding generalized multinomial coefficient. Our results reveal that maximization of S_{2-q} under the usual expectation, or of S_q under the q-average using the escort expectation, is naturally derived from the combinatorial formulations for Tsallis statistics with the respective combinatorial dualities, that is, one for the additive duality and the other for the multiplicative duality.
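The objects involved are easy to write down concretely. A small sketch (illustrative, not from the paper) of the Tsallis entropy and the escort distribution used in the q-average; among distributions on n outcomes, the uniform one maximizes S_q for q > 0:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1).

    In the limit q -> 1 this recovers the Shannon entropy.
    """
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def escort(p, q):
    """Escort distribution P_i = p_i^q / sum_j p_j^q used in the q-average."""
    z = sum(pi ** q for pi in p)
    return [pi ** q / z for pi in p]
```

For the uniform distribution on n outcomes, S_q = (1 - n^(1-q)) / (q - 1); e.g. n = 4, q = 2 gives 0.75.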
Anatomy of maximal stop mixing in the MSSM
Energy Technology Data Exchange (ETDEWEB)
Bruemmer, Felix [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Kraml, Sabine; Kulkarni, Suchita [CNRS/IN2P3, INPG, Grenoble (France). Laboratoire de Physique Subatomique et de Cosmologie
2012-05-15
A Standard Model-like Higgs near 125 GeV in the MSSM requires multi-TeV stop masses, or a near-maximal contribution to its mass from stop mixing. We investigate the maximal mixing scenario, and in particular its prospects for being realized in potentially realistic GUT models. We work out constraints on the possible GUT-scale soft terms, which we compare with what can be obtained from some well-known mechanisms of SUSY breaking mediation. Finally, we analyze two promising scenarios in detail, namely gaugino mediation and gravity mediation with non-universal Higgs masses.
The Large Margin Mechanism for Differentially Private Maximization
Chaudhuri, Kamalika; Hsu, Daniel; Song, Shuang
2014-01-01
A basic problem in the design of privacy-preserving algorithms is the private maximization problem: the goal is to pick an item from a universe that (approximately) maximizes a data-dependent function, all under the constraint of differential privacy. This problem has been used as a sub-routine in many privacy-preserving algorithms for statistics and machine-learning. Previous algorithms for this problem are either range-dependent---i.e., their utility diminishes with the size of the universe...
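The classical range-dependent baseline for private maximization is the exponential mechanism, whose utility guarantee degrades with log of the universe size. A minimal sketch of that baseline (illustrative; this is not the paper's large margin mechanism):

```python
import math
import random

def exponential_mechanism(universe, quality, epsilon, sensitivity=1.0, rng=None):
    """Exponential mechanism: sample item i with probability proportional
    to exp(epsilon * quality(i) / (2 * sensitivity)).

    Differential privacy follows because changing one record shifts each
    quality score by at most `sensitivity`.
    """
    rng = rng or random.Random(0)
    scores = [quality(i) for i in universe]
    m = max(scores)  # subtract the max for numerical stability
    weights = [math.exp(epsilon * (s - m) / (2 * sensitivity)) for s in scores]
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for item, w in zip(universe, weights):
        acc += w
        if r <= acc:
            return item
    return universe[-1]
```

With a large privacy budget the mechanism almost always returns the true maximizer; as epsilon shrinks, the output distribution flattens over the whole universe.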
Media independence and dividend policy
DEFF Research Database (Denmark)
Farooq, Omar; Dandoune, Salma
2012-01-01
Can media pressurize managers to disgorge excess cash to shareholders? Do firms in countries with more independent media follow different dividend policies than firms with less independent media? This paper seeks to answer these questions and aims to document the relationship between media independence and dividend policies in emerging markets. Using a dataset from twenty-three emerging markets, we show a significantly negative relationship between dividend policies (payout ratio and decision to pay dividend) and media independence. We argue that independent media reduces information asymmetries for stock market participants. Consequently, stock market participants in emerging markets with more independent media do not demand as high and as much dividends as their counterparts in emerging markets with less independent media. We also show that press independence is more important in defining...
Energy Technology Data Exchange (ETDEWEB)
Patil, Chinmaya; Naghshtabrizi, Payam; Verma, Rajeev; Tang, Zhijun; Smith, Kandler; Shi, Ying
2016-08-01
This paper presents a control strategy to maximize the fuel economy of a parallel hybrid electric vehicle over a target battery life. Many approaches to maximizing the fuel economy of a parallel hybrid electric vehicle do not consider the effect of the control strategy on the life of the battery, which leads to an oversized and underutilized battery. There is a trade-off between how aggressively to use and 'consume' the battery versus using the engine and consuming fuel. The proposed approach addresses this trade-off by exploiting the differences between the fast dynamics of vehicle power management and the slow dynamics of battery aging. The control strategy is separated into two parts: (1) Predictive Battery Management (PBM) and (2) Predictive Power Management (PPM). PBM is the higher-level control with a slow update rate, e.g. once per month, responsible for generating optimal set points for PPM; the set points considered in this paper are the battery power limits and state of charge (SOC). The problem of finding the optimal set points over the target battery life that minimize engine fuel consumption is solved using dynamic programming. PPM is the lower-level control with a high update rate, e.g. once per second, responsible for generating the optimal HEV energy management controls, and is implemented using a model predictive control approach. The PPM objective is to find the engine and battery power commands that achieve the best fuel economy given the battery power and SOC constraints imposed by PBM. Simulation results with a medium-duty commercial hybrid electric vehicle and the proposed two-level hierarchical control strategy show that HEV fuel economy is maximized while meeting a specified target battery life. In contrast, the optimal unconstrained control strategy achieves marginally higher fuel economy but fails to meet the target battery life.
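The division of labor between the two layers can be caricatured in a few lines. A toy rule-based stand-in for the lower level (all names and units are hypothetical; the paper's PPM is a model predictive controller, and its PBM solves a dynamic program):

```python
def power_split(p_demand, soc, p_batt_max, soc_min, soc_max,
                capacity_kwh=10.0, dt_h=1.0 / 3600.0):
    """Toy PPM-like rule: prefer battery power up to the limits handed
    down by the upper level (PBM); the engine covers the rest.

    Returns (engine_kw, battery_kw, new_soc). Illustrative only.
    """
    # SOC headroom above the PBM floor, converted to deliverable kW this step.
    headroom_kw = (soc - soc_min) * capacity_kwh / dt_h
    p_batt = max(0.0, min(p_demand, p_batt_max, headroom_kw))
    p_engine = p_demand - p_batt
    new_soc = soc - p_batt * dt_h / capacity_kwh
    return p_engine, p_batt, min(new_soc, soc_max)
```

When SOC sits at the PBM floor, the battery contributes nothing and the engine carries the full demand, which is exactly the battery-life-versus-fuel trade-off the set points encode.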
Food systems in correctional settings
DEFF Research Database (Denmark)
Smoyer, Amy; Kjær Minke, Linda
Food is a central component of life in correctional institutions and plays a critical role in the physical and mental health of incarcerated people and the construction of prisoners' identities and relationships. An understanding of the role of food in correctional settings and the effective management of food systems may improve outcomes for incarcerated people and help correctional administrators to maximize their health and safety. This report summarizes existing research on food systems in correctional settings and provides examples of food programmes in prison and remand facilities, including a case study of food-related innovation in the Danish correctional system. It offers specific conclusions for policy-makers, administrators of correctional institutions and prison-food-service professionals, and makes proposals for future research.
Influence of variable resistance loading on subsequent free weight maximal back squat performance.
Mina, Minas A; Blazevich, Anthony J; Giakas, Giannis; Kay, Anthony D
2014-10-01
The purpose of the study was to determine the potentiating effects of variable resistance (VR) exercise during a warm-up on subsequent free-weight resistance (FWR) maximal squat performance. In the first session, 16 recreationally active men (age = 26.0 ± 7.8 years; height = 1.7 ± 0.2 m; mass = 82.6 ± 12.7 kg) were familiarized with the experimental protocols and tested for 1 repetition maximum (1RM) squat lift. The subjects then visited the laboratory on 2 further occasions under either control or experimental conditions. During these conditions, 2 sets of 3 repetitions of either FWR (control) or VR (experimental) squat lifts at 85% of 1RM were performed; during the experimental condition, 35% of the load was generated from band tension. After a 5-minute rest, 1RM, 3D knee joint kinematics, and vastus medialis, vastus lateralis, rectus femoris, and semitendinosus electromyogram (EMG) signals were recorded simultaneously. No subject increased 1RM after FWR; however, 13 of 16 (81%) subjects increased 1RM after VR (mean = 7.7%; p < 0.05), while no significant changes in knee joint kinematics (p > 0.05) or EMG amplitudes (mean = 5.9%; p > 0.05) occurred. Preconditioning using VR significantly increased 1RM without detectable changes in knee extensor muscle activity or knee flexion angle, although eccentric and concentric velocities were reduced. Thus, VR seems to potentiate the neuromuscular system to enhance subsequent maximal lifting performance. Athletes could thus use VR during warm-up routines to maximize squat performance.
Energy localization in maximally entangled two- and three-qubit phase space
International Nuclear Information System (INIS)
Pashaev, Oktay K; Gurkan, Zeynep N
2012-01-01
Motivated by the Möbius transformation for symmetric points under the generalized circle in the complex plane, the system of symmetric spin coherent states corresponding to antipodal qubit states is introduced. In terms of these states, we construct the maximally entangled complete set of two-qubit coherent states, which in the limiting cases reduces to the Bell basis. A specific property of our symmetric coherent states is that they never become unentangled for any value of ψ from the complex plane. Entanglement quantifications of our states are given by the reduced density matrix and the concurrence determinant, and it is shown that our basis is maximally entangled. Universal one- and two-qubit gates in this new coherent-state basis are calculated. As an application, we find the Q symbol of the XYZ model Hamiltonian operator H as an average energy function in maximally entangled two- and three-qubit phase space. It shows a regular finite-energy localized structure with specific local extremum points. The concurrence and fidelity of quantum evolution with dimerization of double periodic patterns are given. (paper)
Stable Chimeras and Independently Synchronizable Clusters
Cho, Young Sul; Nishikawa, Takashi; Motter, Adilson E.
2017-08-01
Cluster synchronization is a phenomenon in which a network self-organizes into a pattern of synchronized sets. It has been shown that diverse patterns of stable cluster synchronization can be captured by symmetries of the network. Here, we establish a theoretical basis to divide an arbitrary pattern of symmetry clusters into independently synchronizable cluster sets, in which the synchronization stability of the individual clusters in each set is decoupled from that in all the other sets. Using this framework, we suggest a new approach to find permanently stable chimera states by capturing two or more symmetry clusters—at least one stable and one unstable—that compose the entire fully symmetric network.
Maximal saddle solution of a nonlinear elliptic equation involving the ...
Indian Academy of Sciences (India)
College of Mathematics and Econometrics, Hunan University, Changsha 410082, China. E-mail: huahuiyan@163.com; duzr@hnu.edu.cn. MS received 3 September 2012; revised 20 December 2012. Abstract. A saddle solution is called maximal saddle solution if its absolute value is not smaller than those absolute values ...
Quantitative approaches for profit maximization in direct marketing
van der Scheer, H.R.
1998-01-01
An effective direct marketing campaign aims at selecting those targets, offer and communication elements - at the right time - that maximize the net profits. The list of individuals to be mailed, i.e. the targets, is considered to be the most important component. Therefore, a large amount of direct
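The core of profit-maximizing target selection is a simple expected-value threshold: mail an individual only if the estimated response probability times the margin exceeds the mailing cost. A minimal sketch (names and numbers hypothetical):

```python
def select_targets(prospects, margin, mailing_cost):
    """Profit-maximizing list selection.

    `prospects` maps an individual's id to an estimated response
    probability. An individual is mailed exactly when the expected
    profit p * margin - mailing_cost is positive. Returns the selected
    ids and the total expected net profit of the mailing.
    """
    selected = [i for i, p in prospects.items() if p * margin > mailing_cost]
    profit = sum(prospects[i] * margin - mailing_cost for i in selected)
    return selected, profit
```

In practice the response probabilities come from a fitted model, and (as the abstract notes) treating estimated parameters as known understates the uncertainty in the selection.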
An optimal thermal condition for maximal chlorophyll extraction
Directory of Open Access Journals (Sweden)
Fu Jia-Jia
2017-01-01
This work describes an environmentally friendly process for chlorophyll extraction from bamboo leaves. A shaking water bath and an ultrasonic cleaner are used in this technology, and the influence of the temperature of the water bath and of the ultrasonic cleaner is evaluated. Results indicated that there is an optimal thermal condition for maximal yield of chlorophyll.
Maximal multiplier operators in Lp(·)(Rn) spaces
Czech Academy of Sciences Publication Activity Database
Gogatishvili, Amiran; Kopaliani, T.
2016-01-01
Roč. 140, č. 4 (2016), s. 86-97 ISSN 0007-4497 R&D Projects: GA ČR GA13-14743S Institutional support: RVO:67985840 Keywords: spherical maximal function * variable Lebesgue spaces * boundedness result Subject RIV: BA - General Mathematics Impact factor: 0.750, year: 2016 http://www.sciencedirect.com/science/article/pii/S0007449715000329
Half-maximal supersymmetry from exceptional field theory
Energy Technology Data Exchange (ETDEWEB)
Malek, Emanuel [Arnold Sommerfeld Center for Theoretical Physics, Department fuer Physik, Ludwig-Maximilians-Universitaet Muenchen (Germany)
2017-10-15
We study D ≥ 4-dimensional half-maximal flux backgrounds using exceptional field theory. We define the relevant generalised structures and also find the integrability conditions which give warped half-maximal Minkowski_D and AdS_D vacua. We then show how to obtain consistent truncations of type II / 11-dimensional SUGRA which break half the supersymmetry. Such truncations can be defined on backgrounds admitting exceptional generalised SO(d - 1 - N) structures, where d = 11 - D, and N is the number of vector multiplets obtained in the lower-dimensional theory. Our procedure yields the most general embedding tensors satisfying the linear constraint of half-maximal gauged SUGRA. We use this to prove that all D ≥ 4 half-maximal warped AdS_D and Minkowski_D vacua of type II / 11-dimensional SUGRA admit a consistent truncation keeping only the gravitational supermultiplet. We also show how to obtain heterotic double field theory from exceptional field theory and comment on the M-theory / heterotic duality. In five dimensions, we find a new SO(5, N) double field theory with a (6 + N)-dimensional extended space. Its section condition has one solution corresponding to 10-dimensional N = 1 supergravity and another yielding six-dimensional N = (2, 0) SUGRA. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Local Hamiltonians for maximally multipartite-entangled states
Facchi, P.; Florio, G.; Pascazio, S.; Pepe, F.
2010-10-01
We study the conditions for obtaining maximally multipartite-entangled states (MMESs) as nondegenerate eigenstates of Hamiltonians that involve only short-range interactions. We investigate small-size systems (with a number of qubits ranging from 3 to 5) and show some example Hamiltonians with MMESs as eigenstates.
Local Hamiltonians for maximally multipartite-entangled states
International Nuclear Information System (INIS)
Facchi, P.; Florio, G.; Pascazio, S.; Pepe, F.
2010-01-01
We study the conditions for obtaining maximally multipartite-entangled states (MMESs) as nondegenerate eigenstates of Hamiltonians that involve only short-range interactions. We investigate small-size systems (with a number of qubits ranging from 3 to 5) and show some example Hamiltonians with MMESs as eigenstates.
Submaximal exercise capacity and maximal power output in polio subjects
Nollet, F.; Beelen, A.; Sargeant, A. J.; de Visser, M.; Lankhorst, G. J.; de Jong, B. A.
2001-01-01
OBJECTIVES: To compare the submaximal exercise capacity of polio subjects with postpoliomyelitis syndrome (PPS) and without (non-PPS) with that of healthy control subjects, to investigate the relationship of this capacity with maximal short-term power and quadriceps strength, and to evaluate
Maximizing car insurance online sales by developing superior webshop
Pylväs, Paula
2014-01-01
The purpose of this thesis work was to investigate what kind of webshop and what kind of improvements would increase customer satisfaction and maximize car insurance online sales by volume and by value. The main measure for this is the conversion rate: the percentage of potential buyers entering the site who actually make a purchase.
Principle of Entropy Maximization for Nonequilibrium Steady States
DEFF Research Database (Denmark)
Shapiro, Alexander; Stenby, Erling Halfdan
2002-01-01
The goal of this contribution is to find out to what extent the principle of entropy maximization, which serves as a basis for the equilibrium thermodynamics, may be generalized onto non-equilibrium steady states. We prove a theorem that, in the system of thermodynamic coordinates, where entropy...
Transformation of bipartite non-maximally entangled states into a ...
Indian Academy of Sciences (India)
We present two schemes for transforming bipartite non-maximally entangled states into a W state in cavity QED system, by using highly detuned interactions and the resonant interactions between two-level atoms and a single-mode cavity field. A tri-atom W state can be generated by adjusting the interaction times between ...
Maximal exercise performance in patients with postcancer fatigue
Prinsen, H.; Hopman, M. T. E.; Zwarts, M. J.; Leer, J. W. H.; Heerschap, A.; Bleijenberg, G.; van Laarhoven, H. W. M.
2013-01-01
The aim of this study is to examine whether physical fitness of severely fatigued and non-fatigued cancer survivors, as measured by maximal exercise performance, is different between both groups and, if so, whether this difference can be explained by differences in physical activity, self-efficacy
The Boundary Crossing Theorem and the Maximal Stability Interval
Directory of Open Access Journals (Sweden)
Jorge-Antonio López-Renteria
2011-01-01
useful tools in the study of the stability of families of polynomials. Although both of these theorems seem intuitively obvious, they can be used to prove important results. In this paper, we give generalizations of these two theorems and we apply such generalizations to find the maximal stability interval.
Modifying Softball for Maximizing Learning Outcomes in Physical Education
Brian, Ali; Ward, Phillip; Goodway, Jacqueline D.; Sutherland, Sue
2014-01-01
Softball is taught in many physical education programs throughout the United States. This article describes modifications that maximize learning outcomes and that address the National Standards and safety recommendations. The modifications focus on tasks and equipment, developmentally appropriate motor-skill acquisition, increasing number of…
Do Speakers and Listeners Observe the Gricean Maxim of Quantity?
Engelhardt, Paul E.; Bailey, Karl G. D.; Ferreira, Fernanda
2006-01-01
The Gricean Maxim of Quantity is believed to govern linguistic performance. Speakers are assumed to provide as much information as required for referent identification and no more, and listeners are believed to expect unambiguous but concise descriptions. In three experiments we examined the extent to which naive participants are sensitive to the…
Extract of Zanthoxylum bungeanum maxim seed oil reduces ...
African Journals Online (AJOL)
Purpose: To investigate the anti-hyperlipidaemic effect of extract of Zanthoxylum bungeanum Maxim. seed oil (EZSO) on high-fat diet (HFD)-induced hyperlipidemic hamsters. Methods: Following feeding with HFD for 30 days, hyperlipidemic hamsters were intragastrically treated with EZSO for 60 days. Serum levels of ...
Dynamical generation of maximally entangled states in two identical cavities
International Nuclear Information System (INIS)
Alexanian, Moorad
2011-01-01
The generation of entanglement between two identical coupled cavities, each containing a single three-level atom, is studied when the cavities exchange two coherent photons and are in the N=2,4 manifolds, where N represents the maximum number of photons possible in either cavity. The atom-photon state of each cavity is described by a qutrit for N=2 and a five-dimensional qudit for N=4. However, the conservation of the total value of N for the interacting two-cavity system limits the total number of states to only 4 states for N=2 and 8 states for N=4, rather than the usual 9 for two qutrits and 25 for two five-dimensional qudits. In the N=2 manifold, two-qutrit states dynamically generate four maximally entangled Bell states from initially unentangled states. In the N=4 manifold, two-qudit states dynamically generate maximally entangled states involving three or four states. The generation of these maximally entangled states occurs rather rapidly for large hopping strengths. The cavities function as a storage of periodically generated maximally entangled states.
MAXIMIZING OPTO-ELASTIC INTERACTION USING TOPOLOGY OPTIMIZATION
DEFF Research Database (Denmark)
Gersborg, Allan Roulund; Sigmund, Ole
Secondly, there is the photo-elastic effect, which changes the refractive index through the Pockels coefficients as the material is strained. For the case of transverse electric modes, we study how the two effects change the material distribution which maximizes the change in the optical transmission...
Influence of Lumber Volume Maximization in Sawing Hardwood Sawlogs
Philip H. Steele; Francis G. Wagner; Lalit Kumar; Philip A. Araman
1993-01-01
The Best Opening Face (BOF) technology for volume maximization during sawing has been rapidly adopted by softwood sawmills. Application of this technology in hardwood sawmills has been limited because of their emphasis on sawing for the highest possible grade of lumber. The reason for this emphasis is that there is a relatively large difference in price between the...
Should I Stay or Should I Go? Maximizers versus Satisficers
Buri, John R.; Gunty, Amy; King, Stephanie L.
2008-01-01
In the present study, university students were presented a scenario in which a married couple was struggling in their marriage. These students were asked how likely it is that they would stay in a difficult marriage like the one described in the scenario. Each student also completed Schwartz's (2004) Maximization Scale. High scorers on this scale…
Transformation of bipartite non-maximally entangled states into a ...
Indian Academy of Sciences (India)
We present two schemes for transforming bipartite non-maximally entangled states into a W state in cavity QED system, by using highly detuned interactions and the resonant interactions between ... Proceedings of the International Workshop/Conference on Computational Condensed Matter Physics and Materials Science
Assessment of maximal handgrip strength: How many attempts are needed?
Reijnierse, Esmee M.; de Jong, Nynke; Trappenburg, Marijke C.; Blauw, Gerard Jan; Butler-Browne, Gillian; Gapeyeva, Helena; Hogrel, Jean Yves; Mcphee, Jamie S.; Narici, Marco V.; Sipilä, Sarianna; Stenroth, Lauri; van Lummel, Rob C.; Pijnappels, Mirjam; Meskers, Carel G M; Maier, Andrea B.
Background: Handgrip strength (HGS) is used to identify individuals with low muscle strength (dynapenia). The influence of the number of attempts on maximal HGS is not yet known and may differ depending on age and health status. This study aimed to assess how many attempts of HGS are required to
Throughput maximization of parcel sorter systems by scheduling inbound containers
Haneyah, S.W.A.; Schutten, Johannes M.J.; Fikse, K.; Clausen, Uwe; ten Hompel, Michael; Meier, J. Fabian
2013-01-01
This paper addresses the inbound container scheduling problem for automated sorter systems in express parcel sorting. The purpose is to analyze which container scheduling approaches maximize the throughput of sorter systems. We build on existing literature, particularly on the dynamic load balancing
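A simple baseline for such container scheduling is greedy load balancing: release each container to the infeed line with the least work assigned so far, so no part of the sorter starves while others queue. A minimal sketch (illustrative; the paper's approaches build on dynamic load balancing, not this exact rule):

```python
import heapq

def schedule_containers(container_loads, n_infeeds):
    """Greedy balancing of inbound containers over sorter infeed lines.

    Containers (given as parcel counts) are taken largest-first (the LPT
    rule) and each is assigned to the infeed with the least accumulated
    work, which keeps all infeeds busy and the sorter throughput high.
    Returns the list of container loads assigned to each infeed.
    """
    heap = [(0, i) for i in range(n_infeeds)]  # (work so far, infeed id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_infeeds)]
    for load in sorted(container_loads, reverse=True):
        work, i = heapq.heappop(heap)
        assignment[i].append(load)
        heapq.heappush(heap, (work + load, i))
    return assignment
```

The makespan of the most-loaded infeed bounds when the last parcel enters the sorter, so balancing the infeeds is a proxy for maximizing throughput.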
Prediction of maximal heart rate: comparison using a novel and ...
African Journals Online (AJOL)
Prediction of maximal heart rate: comparison using a novel and conventional equation. LR Keytel, E Mukwevho, MA Will, M Lambert. African Journal for Physical, Health Education, Recreation and Dance Vol. 11(3) 2005: 269-277.
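The record does not reproduce the equations, but the "conventional" estimate usually meant in this literature is the Fox formula, with the Tanaka regression a widely used alternative; for illustration (the study's own novel equation is not shown here):

```python
def hr_max_conventional(age):
    """Conventional (Fox) estimate: HRmax = 220 - age (beats/min)."""
    return 220 - age

def hr_max_tanaka(age):
    """Tanaka et al. (2001) regression: HRmax = 208 - 0.7 * age."""
    return 208 - 0.7 * age
```

The two estimates coincide at age 40 and diverge on either side, which is one reason comparisons across equations are of practical interest.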
Aspects of multiuser MIMO for cell throughput maximization
DEFF Research Database (Denmark)
Bauch, Gerhard; Tejera, Pedro; Guthy, Christian
2007-01-01
We consider a multiuser MIMO downlink scenario where the resources in time, frequency and space are allocated such that the total cell throughput is maximized. This is achieved by exploiting multiuser diversity, i.e. the physical resources are allocated to the user with the highest SNR. We assume...
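The multiuser-diversity rule the abstract describes is a one-liner in code: each physical resource block goes to the user with the highest SNR on that block. A minimal sketch (names hypothetical):

```python
def allocate_resources(snr, n_blocks):
    """Max-SNR (multiuser diversity) scheduling.

    `snr[u][b]` is user u's SNR on resource block b. Each block is
    allocated to the user with the highest SNR on it; the returned list
    maps each block index to the chosen user index.
    """
    return [max(range(len(snr)), key=lambda u: snr[u][b])
            for b in range(n_blocks)]
```

Because independent fading makes different users peak on different blocks, this rule raises total cell throughput, at the cost of fairness for users with persistently poor channels.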
Profit-Maximizing Principles, Instructional Units for Vocational Agriculture.
Barker, Richard L.
The purpose of this guide is to assist vocational agriculture teachers in stimulating junior and senior high school student thinking, understanding, and decision making as associated with profit-maximizing principles of farm operation for use in farm management. It was developed under a U.S. Office of Education grant by teacher-educators, a farm…
How Managerial Ownership Affects Profit Maximization in Newspaper Firms.
Busterna, John C.
1989-01-01
Explores whether different levels of a manager's ownership of a newspaper affects the manager's profit maximizing attitudes and behavior. Finds that owner-managers tend to place less emphasis on profits than non-owner-controlled newspapers, contrary to economic theory and empirical evidence from other industries. (RS)
The Profit-Maximizing Firm: Old Wine in New Bottles.
Felder, Joseph
1990-01-01
Explains and illustrates a simplified use of graphical analysis for analyzing the profit-maximizing firm. Believes that graphical analysis helps college students gain a deeper understanding of marginalism and an increased ability to formulate economic problems in marginalist terms. (DB)
Discussion on: "Profit Maximization of a Power Plant"
DEFF Research Database (Denmark)
Boomsma (fhv. Kristoffersen), Trine Krogh; Fleten, Stein-Erik
2012-01-01
Kragelund et al. provides an interesting contribution to operations scheduling in liberalized electricity markets. They address the problem of profit maximization for a power plant participating in the electricity market. In particular, given that the plant has already been dispatched in a day...
A decision theoretic framework for profit maximization in direct marketing
Muus, L.; van der Scheer, H.; Wansbeek, T.J.; Montgomery, A.; Franses, P.H.B.F.
2002-01-01
One of the most important issues facing a firm involved in direct marketing is the selection of addresses from a mailing list. When the parameters of the model describing consumers' reaction to a mailing are known, addresses for a future mailing can be selected in a profit-maximizing way. Usually,
Maximizing profits associated with abandonment decisions and options
International Nuclear Information System (INIS)
Antia, D.D.J.
1994-01-01
Economic strategies which are designed to maximize profits associated with abandonment decisions and options focus on: extending field life; offsetting of economic risks onto a third party; reuse of facilities and infrastructure; expansion of associated secondary processing and distribution capabilities and usage; and the sale of abandonment units to a third party
Mentoring as Professional Development for Novice Entrepreneurs: Maximizing the Learning
St-Jean, Etienne
2012-01-01
Mentoring can be seen as relevant if not essential in the continuing professional development of entrepreneurs. In the present study, we seek to understand how to maximize the learning that occurs through the mentoring process. To achieve this, we consider various elements that the literature suggested are associated with successful mentoring and…
Maximizing plant density affects broccoli yield and quality
Increased demand for fresh market bunch broccoli (Brassica oleracea L. var. italica) has led to increased production along the United States east coast. Maximizing broccoli yields is a primary concern for quickly expanding southeastern commercial markets. This broccoli plant density study was carr...
Off-shell representations of maximally-extended supersymmetry
International Nuclear Information System (INIS)
Cox, P.H.
1985-01-01
A general theorem on the necessity of off-shell central charges in representations of maximally-extended supersymmetry (number of spinor charges = 4 × largest spin) is presented. A procedure for building larger and higher-N representations is also explored; a (noninteracting) N=8, maximum spin 2, off-shell representation is achieved. Difficulties in adding interactions for this representation are discussed
Maximizing the model for Discounted Stream of Utility from ...
African Journals Online (AJOL)
Osagiede et al. (2009) considered an analytic model for maximizing a discounted stream of utility from consumption when the rate of production is linear. A solution was provided up to the level where methods of solving ordinary differential equations would be applied, but they left off there as a result of the mathematical complexity ...
Stewart's maxims: eight "do's" for successfully communicating silviculture to policymakers
R. E. Stewart
1997-01-01
Technical specialists may experience difficulties in presenting information to non-technical policymakers and having that information used. Eight maxims are discussed that should help the silviculturist successfully provide technical information to non-technical audiences so that it will be considered in the formulation of policy.
The Bianchi classification of maximal D = 8 gauged supergravities
Bergshoeff, Eric; Gran, Ulf; Linares, Román; Nielsen, Mikkel; Ortín, Tomás; Roest, Diederik
2003-01-01
We perform the generalized dimensional reduction of D = 11 supergravity over three-dimensional group manifolds as classified by Bianchi. Thus, we construct 11 different maximal D = 8 gauged supergravities, two of which have an additional parameter. One class of group manifolds (class B) leads to
The Bianchi classification of maximal D=8 gauged supergravities
Bergshoeff, E; Gran, U; Linares, R; Nielsen, M; Ortin, T; Roest, D
2003-01-01
We perform the generalized dimensional reduction of D = 11 supergravity over three-dimensional group manifolds as classified by Bianchi. Thus, we construct 11 different maximal D = 8 gauged supergravities, two of which have an additional parameter. One class of group manifolds (class B) leads to
Hardy-type nonlocality proof for two maximally entangled particles
International Nuclear Information System (INIS)
Kalamidas, D.
2005-01-01
We present, for the first time, a Hardy-type proof of nonlocality for two maximally entangled particles in a four-dimensional total Hilbert space. Furthermore, the violation of local realistic predictions occurs for 25% of trials, exceeding the 9% maximum obtained by Hardy for non-maximally entangled states. (author)
Maximal Sharing in the Lambda Calculus with Letrec
Grabmayer, C.A.; Rochel, J.
2014-01-01
Increasing sharing in programs is desirable to compactify the code, and to avoid duplication of reduction work at run-time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion
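The underlying idea of maximal sharing, stripped of the lambda-letrec machinery, is interning: build each subterm once and reuse the same object for all equal occurrences. A minimal first-order sketch (illustrative only; the paper works on lambda-letrec terms via a graph interpretation and bisimulation, not this simple hash-consing):

```python
def make_term(pool, label, *children):
    """Hash-consed term construction.

    `pool` maps structural keys to the unique representative term.
    Because children are interned before their parents, equal subterms
    are always the *same* object, so the resulting term DAG exhibits
    maximal sharing of common subterms.
    """
    key = (label, children)
    if key not in pool:
        pool[key] = (label, children)
    return pool[key]
```

Building `f(x, x)` twice from scratch yields the identical object, so duplicated reduction work on the shared subterm is avoided, which is precisely the compactness-and-speed motivation in the abstract.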
Maximal near-field radiative heat transfer between two plates
Nefzaoui, Elyes; Ezzahri, Younès; Drévillon, Jérémie; Joulain, Karl
2013-09-01
Near-field radiative transfer is a promising way to significantly and simultaneously enhance both the power densities and the efficiencies of thermo-photovoltaic (TPV) devices. A parametric study of the performance of Drude and Lorentz models in maximizing near-field radiative heat transfer between two semi-infinite planes separated by nanometric distances at room temperature is presented in this paper. Optimal parameters of these models that provide optical properties maximizing the radiative heat flux are reported and compared to real materials usually considered in similar studies, silicon carbide and heavily doped silicon in this case. Results are obtained by exact and approximate (in the extreme near-field regime, under the electrostatic-limit hypothesis) calculations. The two methods are compared in terms of accuracy and CPU resource consumption, and their differences are explained according to a mesoscopic description of near-field radiative heat transfer. Finally, the frequently assumed hypothesis that radiative heat transfer is maximal when the two semi-infinite planes are made of identical materials is numerically confirmed, and its practical consequences are discussed. The presented results highlight relevant paths to follow in order to choose or design materials maximizing nano-TPV device performance.
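The Drude parametrization used in such studies is standard, and for two identical Drude half-spaces the near-field flux is resonantly enhanced near the surface-mode frequency where Re ε = −1. A small sketch of both facts (symbols ε_∞, ω_p, γ as in the usual Drude form; units are whatever ω is given in):

```python
import math

def drude_permittivity(omega, eps_inf, omega_p, gamma):
    """Drude dielectric function: eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w)."""
    return eps_inf - omega_p ** 2 / (omega ** 2 + 1j * gamma * omega)

def surface_mode_frequency(eps_inf, omega_p, gamma):
    """Frequency where Re(eps) = -1 for the Drude model above.

    Re(eps) = eps_inf - wp^2 / (w^2 + gamma^2), so setting it to -1
    gives w = sqrt(wp^2 / (eps_inf + 1) - gamma^2). Near this frequency
    the flux between two identical plates is resonantly enhanced.
    """
    return math.sqrt(omega_p ** 2 / (eps_inf + 1.0) - gamma ** 2)
```

Sweeping ω_p and γ and evaluating the flux at such resonances is the kind of parametric optimization the abstract describes.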
Impact of training status on maximal oxygen uptake criteria ...
African Journals Online (AJOL)
Peak treadmill running speed was significantly faster and total test time significantly longer in the trained group. In contrast, peak lactate, although maximal for both groups, was significantly higher in the untrained group (13.5 mmol.l-1 compared with 10.3 mmol.l-1). The other responses were not different between the groups ...
Bicarbonate attenuates arterial desaturation during maximal exercise in humans
DEFF Research Database (Denmark)
Nielsen, Henning B; Bredmose, Per P; Strømstad, Morten
2002-01-01
The contribution of pH to exercise-induced arterial O2 desaturation was evaluated by intravenous infusion of sodium bicarbonate (Bic, 1 M; 200-350 ml) or an equal volume of saline (Sal; 1 M) at a constant infusion rate during a "2,000-m" maximal ergometer row in five male oarsmen. Blood...
Nuclear energy and independence
International Nuclear Information System (INIS)
Rotblat, J.
1978-01-01
The pro-nuclear lobby in the United Kingdom won its battle. The Report on the Windscale Inquiry strongly endorsed the application by British Nuclear Fuels (a company owned by the government) to set up a plant to reprocess spent oxide fuels from thermal reactors; a motion in Parliament to postpone a decision was heavily defeated. The Windscale Inquiry was an attempt to settle in a civilized manner what has been tried in other countries by demonstrations and violence. In this exercise, a High Court Judge was given the task of assessing an enormous mass of highly complex technical and medical material, as well as economic, social, and political arguments. The outcome is bitterly disappointing to the objectors, all of whose arguments were rejected. Although the question of whether Britain should embark on a fast breeder reactor program was specifically excluded from the Inquiry, it clearly had a bearing on it. A decision not to proceed with the reprocessing plant would have made a fast breeder program impossible; indeed, the Report argues that such a decision would involve throwing away large indigenous energy resources, a manifest advocacy of the fast breeder. Other arguments for the decision to go ahead with the reprocessing plant included the need to keep the nuclear industry alive, and the profit which Britain will make in processing fuels from other countries, particularly Japan. The author comments further on present UK policy, taking a dissenting view, and then comments on the paper, Nuclear Energy and the Freedom of the West, by A.D. Sakharov
Maximal isometric strength of the cervical musculature in 100 healthy volunteers
DEFF Research Database (Denmark)
Jordan, A; Mehlsen, J; Bülow, P M
1999-01-01
A descriptive study involving maximal isometric strength measurements of the cervical musculature.
Martorelli, André; Bottaro, Martim; Vieira, Amilton; Rocha-Júnior, Valdinar; Cadore, Eduardo; Prestes, Jonato; Wagner, Dale; Martorelli, Saulo
2015-06-01
Studies investigating the effect of rest interval length (RI) between sets on neuromuscular performance and metabolic response during power training are scarce. Therefore, the purpose of this study was to compare maximal power output, muscular activity and blood lactate concentration following 1, 2 or 3 minutes of RI between sets during a squat power training protocol. Twelve resistance-trained men (22.7 ± 3.2 years; 1.79 ± 0.08 m; 81.8 ± 11.3 kg) performed 6 sets of 6 repetitions of the squat exercise at 60% of their 1-repetition maximum. Peak and average power were obtained for each repetition and set using a linear position transducer. Muscular activity and blood lactate were measured pre- and post-exercise session. There was no significant difference between RIs in peak power or average power. However, peak power decreased 5.6%, 1.9%, and 5.9% after 6 sets using 1, 2 and 3 minutes of RI, respectively. Average power also decreased 10.5% (1 min), 2.6% (2 min), and 4.3% (3 min) after 6 sets. Blood lactate increased similarly during the three training sessions (1 min: 5.5 mmol/L, 2 min: 4.3 mmol/L, and 3 min: 4.0 mmol/L) and no significant changes were observed in muscle activity after multiple sets, independent of RI length (pooled ES for 1 min: 0.47, 2 min: 0.65, and 3 min: 1.39). From a practical point of view, the results suggest that 1 to 2 minutes of RI between sets during squat exercise may be sufficient to recover power output in the designed power training protocol. However, if training duration is malleable, we recommend 2 min of RI for optimal recovery and power output maintenance during subsequent exercise sets. Key points: This study demonstrates that 1 minute of RI between sets is sufficient to maintain maximal power output during multiple sets of a power-based exercise when it is composed of few repetitions and the sets are not performed until failure. Therefore, a short RI should be considered when designing training programs for the development of
International Nuclear Information System (INIS)
Li, Sucheng; Anwar, Shahzad; Lu, Weixin; Hang, Zhi Hong; Hou, Bo; Shen, Mingrong; Wang, Chin-Hua
2014-01-01
We study the absorption properties of ultrathin conductive films in the microwave regime, and find a moderate absorption effect which gives rise to a maximal absorbance of 50% if the sheet (square) resistance of the film meets an impedance matching condition. The maximal absorption exhibits a frequency-independent feature and takes place on an extremely subwavelength scale, the film thickness. As a realistic instance, a ∼5 nm thick Au film is predicted to achieve the optimal absorption. In addition, a methodology based on metallic mesh structures is proposed to design frequency-independent ultrathin absorbers. We present a design of such an absorber with 50% absorption, which is verified by numerical simulations
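The 50% ceiling follows from the impedance-matching condition for a free-standing resistive sheet, a standard thin-film result that can be checked numerically. The sketch below assumes normal incidence and a film much thinner than the skin depth; it is not the paper's full calculation.

```python
import numpy as np

Z0 = 376.73  # free-space impedance, ohms

def thin_film_absorbance(Rs):
    """Absorbance of a free-standing ultrathin conductive sheet at normal
    incidence. With x = Z0 / (2 Rs): r = -x/(1+x), t = 1/(1+x),
    so A = 1 - r^2 - t^2 = 2x / (1+x)^2."""
    x = Z0 / (2.0 * Rs)
    r = -x / (1.0 + x)
    t = 1.0 / (1.0 + x)
    return 1.0 - r**2 - t**2

# Sweep the sheet resistance; A peaks at 0.5 when Rs = Z0/2 ~ 188 ohm/sq.
Rs_grid = np.linspace(10.0, 1000.0, 10000)
A = thin_film_absorbance(Rs_grid)
best = Rs_grid[np.argmax(A)]
print(f"max A = {A.max():.3f} at Rs = {best:.0f} ohm/sq")
```

At the matched point the reflected and transmitted waves each carry 25% of the power, so absorption saturates at one half, independent of frequency as long as the thin-sheet assumption holds.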
Future independent power generation and implications for instruments and controls
International Nuclear Information System (INIS)
Williams, J.H.
1991-01-01
This paper reports that the independent power producer market comprises cogeneration, small power generation, and independent power production (IPP) segments. Shortfalls in future electric supply are expected to lead to significant growth in this market. Opportunities for instruments and controls will shift from traditional electric utility applications to the independent power market, with a more diverse set of needs. Importance will be placed on system reliability, quality of power, and the increased demand for clean kilowatt-hours
Dual Competing Photovoltaic Supply Chains: A Social Welfare Maximization Perspective
Directory of Open Access Journals (Sweden)
Zhisong Chen
2017-11-01
In the past decades, inappropriate subsidy policies have caused problems such as serious oversupply, fierce competition and subpar social welfare in the photovoltaic (PV) industry in many nations. There is a clear shortage in the PV industry literature regarding how dual supply chains compete and the key decision issues regarding the competition between dual PV supply chains. It is critical to develop effective subsidy policies for the competing PV supply chains to achieve social welfare maximization. This study has explored the dual PV supply chain competition under the Bertrand competition assumption by three game-theoretical modeling scenarios (or supply chain strategies) considering either the public subsidy or no subsidy from a social welfare maximization perspective. A numerical analysis complemented by two sensitivity analyses provides a better understanding of the pricing and quantity decision dynamics in the dual supply chains under three different supply chain strategies and the corresponding outcomes regarding the total supply chain profits, the social welfare and the required total subsidies. The key findings disclose that if there are public subsidies, the dual PV supply chains have the strongest intention to pursue the decentralized strategy to achieve their maximal returns rather than the centralized strategy that would achieve the maximal social welfare; however, the government would need to pay for the maximal subsidy budget. Thus, the best option for the government would be to encourage the dual PV supply chains to adopt a centralized strategy, since this will not only maximize the social welfare but also, at the same time, minimize the public subsidy. With a smart subsidy policy, the PV industry can make the best use of the subsidy budget and grow in a sustainable way to support the highly demanded solar power generation in many countries trying very hard to increase the proportion of their clean energy to
Gamma loop contributing to maximal voluntary contractions in man.
Hagbarth, K E; Kunesch, E J; Nordin, M; Schmidt, R; Wallin, E U
1986-01-01
A local anaesthetic drug was injected around the peroneal nerve in healthy subjects in order to investigate whether the resulting loss in foot dorsiflexion power in part depended on a gamma-fibre block preventing 'internal' activation of spindle end-organs and thereby depriving the alpha-motoneurones of an excitatory spindle inflow during contraction. The motor outcome of maximal dorsiflexion efforts was assessed by measuring firing rates of individual motor units in the anterior tibial (t.a.) muscle, mean voltage e.m.g. from the pretibial muscles, dorsiflexion force and range of voluntary foot dorsiflexion movements. The tests were performed with and without peripheral conditioning stimuli, such as agonist or antagonist muscle vibration or imposed stretch of the contracting muscles. As compared to control values of t.a. motor unit firing rates in maximal isometric voluntary contractions, the firing rates were lower and more irregular during maximal dorsiflexion efforts performed during subtotal peroneal nerve blocks. During the development of paresis a gradual reduction of motor unit firing rates was observed before the units ceased responding to the voluntary commands. This change in motor unit behaviour was accompanied by a reduction of the mean voltage e.m.g. activity in the pretibial muscles. At a given stage of anaesthesia the e.m.g. responses to maximal voluntary efforts were more affected than the responses evoked by electric nerve stimuli delivered proximal to the block, indicating that impaired impulse transmission in alpha motor fibres was not the sole cause of the paresis. The inability to generate high and regular motor unit firing rates during peroneal nerve blocks was accentuated by vibration applied over the antagonistic calf muscles. By contrast, in eight out of ten experiments agonist stretch or vibration caused an enhancement of motor unit firing during the maximal force tasks. The reverse effects of agonist and antagonist vibration on the
Dual Competing Photovoltaic Supply Chains: A Social Welfare Maximization Perspective
Su, Shong-Iee Ivan
2017-01-01
In the past decades, inappropriate subsidy policies have caused problems such as serious oversupply, fierce competition and subpar social welfare in the photovoltaic (PV) industry in many nations. There is a clear shortage in the PV industry literature regarding how dual supply chains compete and the key decision issues regarding the competition between dual PV supply chains. It is critical to develop effective subsidy policies for the competing PV supply chains to achieve social welfare maximization. This study has explored the dual PV supply chain competition under the Bertrand competition assumption by three game-theoretical modeling scenarios (or supply chain strategies) considering either the public subsidy or no subsidy from a social welfare maximization perspective. A numerical analysis complemented by two sensitivity analyses provides a better understanding of the pricing and quantity decision dynamics in the dual supply chains under three different supply chain strategies and the corresponding outcomes regarding the total supply chain profits, the social welfare and the required total subsidies. The key findings disclose that if there are public subsidies, the dual PV supply chains have the strongest intention to pursue the decentralized strategy to achieve their maximal returns rather than the centralized strategy that would achieve the maximal social welfare; however, the government would need to pay for the maximal subsidy budget. Thus, the best option for the government would be to encourage the dual PV supply chains to adopt a centralized strategy since this will not only maximize the social welfare but also, at the same time, minimize the public subsidy. With a smart subsidy policy, the PV industry can make the best use of the subsidy budget and grow in a sustainable way to support the highly demanded solar power generation in many countries trying very hard to increase the proportion of their clean energy to combat the global
Dual Competing Photovoltaic Supply Chains: A Social Welfare Maximization Perspective.
Chen, Zhisong; Su, Shong-Iee Ivan
2017-11-20
In the past decades, inappropriate subsidy policies have caused problems such as serious oversupply, fierce competition and subpar social welfare in the photovoltaic (PV) industry in many nations. There is a clear shortage in the PV industry literature regarding how dual supply chains compete and the key decision issues regarding the competition between dual PV supply chains. It is critical to develop effective subsidy policies for the competing PV supply chains to achieve social welfare maximization. This study has explored the dual PV supply chain competition under the Bertrand competition assumption by three game-theoretical modeling scenarios (or supply chain strategies) considering either the public subsidy or no subsidy from a social welfare maximization perspective. A numerical analysis complemented by two sensitivity analyses provides a better understanding of the pricing and quantity decision dynamics in the dual supply chains under three different supply chain strategies and the corresponding outcomes regarding the total supply chain profits, the social welfare and the required total subsidies. The key findings disclose that if there are public subsidies, the dual PV supply chains have the strongest intention to pursue the decentralized strategy to achieve their maximal returns rather than the centralized strategy that would achieve the maximal social welfare; however, the government would need to pay for the maximal subsidy budget. Thus, the best option for the government would be to encourage the dual PV supply chains to adopt a centralized strategy since this will not only maximize the social welfare but also, at the same time, minimize the public subsidy. With a smart subsidy policy, the PV industry can make the best use of the subsidy budget and grow in a sustainable way to support the highly demanded solar power generation in many countries trying very hard to increase the proportion of their clean energy to combat the global
Learning to maximize reward rate: a model based on semi-Markov decision processes.
Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R
2014-01-01
When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time should they spend on each decision in order to achieve the maximum possible total outcome? Deliberating more on one decision usually leads to more outcome, but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of the decision threshold for each condition. We propose a model of learning the optimal values of decision thresholds based on the theory of semi-Markov decision processes (SMDPs). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the values of the decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the value of the decision thresholds until it finally finds the optimal values.
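The core idea, one decision threshold per condition chosen to maximize average reward per unit time, can be illustrated with a toy drift-diffusion simulation. This is a hypothetical sketch, not the paper's SMDP learning algorithm; all parameter values and the grid-search step are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift, threshold, dt=0.02, noise=1.0, max_t=6.0):
    """Toy drift-diffusion trial: returns (correct, decision_time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x >= threshold), t

def reward_rate(thresholds, conditions, n=120, iti=1.0, reward=1.0, penalty=0.0):
    """Average reward per unit time when condition c uses threshold thresholds[c]."""
    total_r, total_t = 0.0, 0.0
    for c, drift in enumerate(conditions):
        for _ in range(n):
            correct, t = simulate_trial(drift, thresholds[c])
            total_r += reward if correct else penalty
            total_t += t + iti
    return total_r / total_t

# Two conditions: easy (high drift) and hard (low drift). A per-condition
# threshold pair is selected to maximize the average reward rate; the paper
# replaces this brute-force search with SMDP-based learning.
conditions = [1.5, 0.4]
grid = [0.3, 0.6, 1.0, 1.5, 2.0]
best = max(((a, b) for a in grid for b in grid),
           key=lambda th: reward_rate(th, conditions))
print("best (easy, hard) thresholds:", best)
```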
Boards: Independent and Committed Directors?
Christophe Volonté
2011-01-01
Regulators, proxy advisors and shareholders are regularly calling for independent directors. However, at the same time, independent directors commonly engage in numerous outside activities, potentially reducing their time and commitment to the particular firm. Using Tobin's Q as an approximation of market valuation and controlling for endogeneity, our empirical analysis reveals that neither is independence positively related to firm performance nor are outside activities negatively related t...
An inverse-source problem for maximization of pore-fluid oscillation within poroelastic formations
Jeong, C.; Kallivokas, L. F.
2016-01-01
This paper discusses a mathematical and numerical modeling approach for identification of an unknown optimal loading time signal of a wave source, atop the ground surface, that can maximize the relative wave motion of a single-phase pore fluid within fluid-saturated porous permeable (poroelastic) rock formations, surrounded by non-permeable semi-infinite elastic solid rock formations, in a one-dimensional setting. The motivation stems from a set of field observations, following seismic events and vibrational tests, suggesting that shaking an oil reservoir is likely to improve oil production rates. This maximization problem is cast into an inverse-source problem, seeking an optimal loading signal that minimizes an objective functional – the reciprocal of kinetic energy in terms of relative pore-fluid wave motion within target poroelastic layers. We use the finite element method to obtain the solution of the governing wave physics of a multi-layered system, where the wave equations for the target poroelastic layers and the elastic wave equation for the surrounding non-permeable layers are coupled with each other. We use a partial-differential-equation-constrained-optimization framework (a state-adjoint-control problem approach) to tackle the minimization problem. The numerical results show that the numerical optimizer recovers optimal loading signals, whose dominant frequencies correspond to amplification frequencies, which can also be obtained by a frequency sweep, leading to larger amplitudes of relative pore-fluid wave motion within the target hydrocarbon formation than other signals.
An inverse-source problem for maximization of pore-fluid oscillation within poroelastic formations
Jeong, C.
2016-07-04
This paper discusses a mathematical and numerical modeling approach for identification of an unknown optimal loading time signal of a wave source, atop the ground surface, that can maximize the relative wave motion of a single-phase pore fluid within fluid-saturated porous permeable (poroelastic) rock formations, surrounded by non-permeable semi-infinite elastic solid rock formations, in a one-dimensional setting. The motivation stems from a set of field observations, following seismic events and vibrational tests, suggesting that shaking an oil reservoir is likely to improve oil production rates. This maximization problem is cast into an inverse-source problem, seeking an optimal loading signal that minimizes an objective functional – the reciprocal of kinetic energy in terms of relative pore-fluid wave motion within target poroelastic layers. We use the finite element method to obtain the solution of the governing wave physics of a multi-layered system, where the wave equations for the target poroelastic layers and the elastic wave equation for the surrounding non-permeable layers are coupled with each other. We use a partial-differential-equation-constrained-optimization framework (a state-adjoint-control problem approach) to tackle the minimization problem. The numerical results show that the numerical optimizer recovers optimal loading signals, whose dominant frequencies correspond to amplification frequencies, which can also be obtained by a frequency sweep, leading to larger amplitudes of relative pore-fluid wave motion within the target hydrocarbon formation than other signals.
Jung, Halim; Jung, Sangwoo; Joo, Sunghee; Song, Changho
2016-01-01
[Purpose] The purpose of this study was to compare changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. [Subjects] Thirty healthy adults participated in this study (15 men and 15 women). [Methods] All participants performed a bridge exercise and abdominal curl-up during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. Pelvic floor mobility...
Magellan Project: Evolving enhanced operations efficiency to maximize science value
Cheuvront, Allan R.; Neuman, James C.; Mckinney, J. Franklin
1994-01-01
Magellan has been one of NASA's most successful spacecraft, returning more science data than all other planetary spacecraft combined. The Magellan Spacecraft Team (SCT) has maximized the science return with innovative operational techniques to overcome anomalies and to perform activities for which the spacecraft was not designed. Commanding the spacecraft was originally time-consuming because the standard development process was envisioned as a set of manual tasks. The Program understood that reducing mission operations costs was essential for an extended mission. Management created an environment that encouraged automation of routine tasks, allowing staff reductions while maximizing the science data returned. Data analysis and trending, command preparation, and command reviews are some of the tasks that were automated. The SCT has accommodated personnel reductions by improving operations efficiency while returning the maximum science data possible.
High Intensity Interval Training for Maximizing Health Outcomes.
Karlsen, Trine; Aamot, Inger-Lise; Haykowsky, Mark; Rognmo, Øivind
Regular physical activity and exercise training are important actions to improve cardiorespiratory fitness and maintain health throughout life. There is solid evidence that exercise is an effective preventative strategy against at least 25 medical conditions, including cardiovascular disease, stroke, hypertension, colon and breast cancer, and type 2 diabetes. Traditionally, endurance exercise training (ET) to improve health-related outcomes has been performed at low to moderate intensity. However, a growing body of evidence suggests that higher exercise intensities may be superior to moderate intensity for maximizing health outcomes. The primary objective of this review is to discuss how aerobic high-intensity interval training (HIIT), as compared to moderate continuous training, may maximize outcomes, and to provide practical advice for successful clinical and home-based HIIT. Copyright © 2017. Published by Elsevier Inc.
Self-guided method to search maximal Bell violations for unknown quantum states
Yang, Li-Kai; Chen, Geng; Zhang, Wen-Hao; Peng, Xing-Xiang; Yu, Shang; Ye, Xiang-Jun; Li, Chuan-Feng; Guo, Guang-Can
2017-11-01
In recent decades, a great variety of research and applications concerning Bell nonlocality have been developed with the advent of quantum information science. Given that Bell nonlocality can be revealed by the violation of a family of Bell inequalities, finding the maximal Bell violation (MBV) for unknown quantum states becomes an important and inevitable task during Bell experiments. In this paper we introduce a self-guided method to find MBVs for unknown states using a stochastic gradient ascent algorithm (SGA), by parametrizing the corresponding Bell operators. For the three investigated systems (two-qubit, three-qubit, and two-qutrit), this method can ascertain the MBV of general two-setting inequalities within 100 iterations. Furthermore, we prove that SGA is also feasible when facing more complex Bell scenarios, e.g., a d-setting, d-outcome Bell inequality. Moreover, compared to other possible methods, SGA exhibits significant superiority in efficiency, robustness, and versatility.
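As a sketch of this kind of search, the two-setting, two-outcome (CHSH) case can be maximized by random-direction finite-difference gradient ascent over measurement angles. The correlation function below assumes a singlet state, and the update rule is an illustrative SPSA-style scheme, not the authors' exact SGA.

```python
import numpy as np

def chsh(angles, E):
    """CHSH value S for measurement angles (a, a2, b, b2), given the
    correlation function E(theta_a, theta_b) of the state under test."""
    a, a2, b, b2 = angles
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

def sga_max_chsh(E, steps=2000, lr=0.05, eps=1e-3, seed=1):
    """Stochastic ascent: estimate the directional derivative along a random
    direction by central finite differences and step along that direction."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 2.0 * np.pi, 4)
    for _ in range(steps):
        d = rng.standard_normal(4)
        g = (chsh(x + eps * d, E) - chsh(x - eps * d, E)) / (2.0 * eps)
        x += lr * g * d
    return chsh(x, E), x

# Singlet-state correlations E(a, b) = -cos(a - b); a few random restarts.
E_singlet = lambda a, b: -np.cos(a - b)
S, angles = max((sga_max_chsh(E_singlet, seed=s) for s in range(5)),
                key=lambda t: t[0])
print(f"found S = {S:.4f} (Tsirelson bound = {2 * np.sqrt(2):.4f})")
```

For the singlet the search approaches the Tsirelson bound 2√2 ≈ 2.828, the known MBV of the CHSH inequality for a maximally entangled two-qubit state.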
On the maximal cut of Feynman integrals and the solution of their differential equations
Directory of Open Access Journals (Sweden)
Amedeo Primo
2017-03-01
The standard procedure for computing scalar multi-loop Feynman integrals consists in reducing them to a basis of so-called master integrals, deriving differential equations in the external invariants satisfied by the latter and, finally, trying to solve them as a Laurent series in ϵ=(4−d)/2, where d is the number of space–time dimensions. The differential equations are, in general, coupled and can be solved using Euler's variation of constants, provided that a set of homogeneous solutions is known. Given an arbitrary differential equation of order higher than one, there exists no general method for finding its homogeneous solutions. In this paper we show that the maximal cut of the integrals under consideration provides one set of homogeneous solutions, simplifying substantially the solution of the differential equations.
An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains
Directory of Open Access Journals (Sweden)
Qihong Duan
2010-01-01
In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data as the first time of reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stages, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
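The computational kernel the abstract refers to, the exponential of an upper-triangular transient sub-generator, determines the phase-type failure-time distribution. Below is a minimal sketch with an invented 3-state sub-generator; the simple Taylor-based matrix exponential stands in for whatever routine a real implementation would use.

```python
import numpy as np

def expm(A, scaling=20, terms=18):
    """Matrix exponential via plain scaling-and-squaring plus a Taylor
    series (adequate for the small triangular matrices used here)."""
    B = A / (2.0 ** scaling)
    S = np.eye(A.shape[0])
    P = np.eye(A.shape[0])
    for k in range(1, terms):
        P = P @ B / k
        S = S + P
    for _ in range(scaling):
        S = S @ S
    return S

# Phase-type failure model: 3 transient states plus one absorbing "failed"
# state. T is the upper-triangular transient sub-generator; each row sum
# being <= 0 leaves the remainder as the absorption (failure) rate.
T = np.array([[-2.0, 1.5, 0.3],
              [ 0.0, -1.0, 0.7],
              [ 0.0,  0.0, -0.5]])
alpha = np.array([1.0, 0.0, 0.0])     # initial distribution: start in state 0

def failure_cdf(t):
    """P(failure time <= t) = 1 - alpha . exp(T t) . 1."""
    return 1.0 - alpha @ expm(T * t) @ np.ones(3)

for t in (0.5, 2.0, 10.0):
    print(f"F({t}) = {failure_cdf(t):.4f}")
```

In the paper's EM iteration, an exponential like this would be evaluated once per data point and matrix block, hence the O(nm²) count quoted above.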
Maximization of energy in the output of a linear system
International Nuclear Information System (INIS)
Dudley, D.G.
1976-01-01
A time-limited signal which, when passed through a linear system, maximizes the total output energy is considered. Previous work has shown that the solution is given by the eigenfunction associated with the maximum eigenvalue in a Hilbert-Schmidt integral equation. Analytical results are available for the case where the transfer function is a low-pass filter. This work is extended by obtaining a numerical solution to the integral equation which allows results for reasonably general transfer functions
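A sketch of the numerical approach described above: discretizing the Hilbert–Schmidt kernel of an ideal low-pass filter turns the integral equation into a symmetric matrix eigenproblem whose top eigenpair gives the maximal output-energy fraction and the optimal input signal. The cutoff, interval length and grid size below are illustrative, not taken from the paper.

```python
import numpy as np

# Kernel K(t, s) = sin(W (t - s)) / (pi (t - s)) for an ideal low-pass
# filter with cutoff W, restricted to the signal interval [-T/2, T/2].
# The top eigenfunction is the unit-energy input maximizing output energy;
# the top eigenvalue is the maximal fraction of energy passed by the filter.
W, T, n = 4.0, 2.0, 400
t = np.linspace(-T / 2, T / 2, n)
h = t[1] - t[0]
D = t[:, None] - t[None, :]
K = np.where(D == 0, W / np.pi,
             np.sin(W * D) / (np.pi * np.where(D == 0, 1.0, D)))

# Symmetric eigenproblem for the discretized integral operator.
vals, vecs = np.linalg.eigh(K * h)
lam_max = vals[-1]
psi = vecs[:, -1] / np.sqrt(h)   # unit-energy optimal input signal
print(f"maximal output-energy fraction ~ {lam_max:.4f}")
```

The optimal inputs recovered this way are the prolate spheroidal wave functions familiar from the analytical low-pass case; for a general transfer function one would replace the kernel accordingly.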
Speeding Up Maximal Causality Reduction with Static Dependency Analysis
Huang, Shiyou; Huang, Jeff
2017-01-01
Stateless Model Checking (SMC) offers a powerful approach to verifying multithreaded programs but suffers from the state-space explosion problem caused by the huge thread interleaving space. The pioneering reduction technique Partial Order Reduction (POR) mitigates this problem by pruning equivalent interleavings from the state space. However, limited by the happens-before relation, POR still explores redundant executions. The recent advance, Maximal Causality Reduction (MCR), shows a promisi...
Planning for partnerships: Maximizing surge capacity resources through service learning.
Adams, Lavonne M; Reams, Paula K; Canclini, Sharon B
2015-01-01
Infectious disease outbreaks and natural or human-caused disasters can strain the community's surge capacity through sudden demand on healthcare activities. Collaborative partnerships between communities and schools of nursing have the potential to maximize resource availability to meet community needs following a disaster. This article explores how communities can work with schools of nursing to enhance surge capacity through systems thinking, integrated planning, and cooperative efforts.
Applications of expectation maximization algorithm for coherent optical communication
DEFF Research Database (Denmark)
Carvalho, L.; Oliveira, J.; Zibar, Darko
2014-01-01
In this invited paper, we present powerful statistical signal processing methods used by the machine learning community and link them to current problems in optical communication. In particular, we look into iterative maximum likelihood parameter estimation based on the expectation maximization algorithm and its application in coherent optical communication systems for linear and nonlinear impairment mitigation. Furthermore, the estimated parameters are used to build a probabilistic model of the system for synthetic impairment generation.
LOAD THAT MAXIMIZES POWER OUTPUT IN COUNTERMOVEMENT JUMP
Directory of Open Access Journals (Sweden)
Pedro Jimenez-Reyes
2016-02-01
Introduction: One of the main problems faced by strength and conditioning coaches is how to objectively quantify and monitor the actual training load undertaken by athletes in order to maximize performance. It is well known that performance of explosive sports activities is largely determined by mechanical power. Objective: This study analysed the height at which maximal power output is generated, and the corresponding load with which it is achieved, in a group of trained male track and field athletes in the countermovement jump (CMJ) test with extra loads (CMJEL). Methods: Fifty national-level male athletes in sprinting and jumping performed a CMJ test with increasing loads down to a jump height of 16 cm. The relative load that maximized the mechanical power output (Pmax) was determined using a force platform synchronized with a linear encoder, estimating the power by peak power, average power and flight time in CMJ. Results: The load that maximized power output corresponded to a jump height of 19.9 ± 2.35 cm, representing 99.1 ± 1% of the maximum power output. In all cases, the load that maximizes power output was the load with which an athlete jumps a height of approximately 20 cm. Conclusion: These results highlight the importance of considering the height achieved in CMJ with extra load instead of power, because maximum power is always attained at approximately the same height. We advise preferential use of the height achieved in the CMJEL test, since it seems to be a valid indicator of an individual's actual neuromuscular potential, providing valid information for coaches and trainers when assessing the performance status of athletes and when quantifying and monitoring training loads, measuring only the height of the jump in the CMJEL exercise.
Finite translation surfaces with maximal number of translations
Schlage-Puchta, Jan-Christoph; Weitze-Schmithuesen, Gabriela
2013-01-01
The natural automorphism group of a translation surface is its group of translations. For finite translation surfaces of genus g > 1 the order of this group is naturally bounded in terms of g due to a Riemann-Hurwitz formula argument. In analogy with classical Hurwitz surfaces, we call surfaces which achieve the maximal bound Hurwitz translation surfaces. We study for which g there exist Hurwitz translation surfaces of genus g.
Crystallographic cut that maximizes the birefringence in photorefractive crystals
Rueda-Parada, Jorge Enrique
2017-01-01
The electro-optical birefringence effect depends on the crystal type, the crystal cut, the applied electric field, and the direction of incidence of light on the principal crystal faces. A study is presented of maximizing the birefringence in photorefractive crystals of cubic crystallographic symmetry in terms of these parameters. General analytical expressions for the birefringence were obtained, from which the birefringence can be established for any type of cut. A new crystallographic cut was en...
On Throughput Maximization in Constant Travel-Time Robotic Cells
Milind Dawande; Chelliah Sriskandarajah; Suresh Sethi
2002-01-01
We consider the problem of scheduling operations in bufferless robotic cells that produce identical parts. The objective is to find a cyclic sequence of robot moves that minimizes the long-run average time to produce a part or, equivalently, maximizes the throughput rate. The robot can be moved in simple cycles that produce one unit or in more complicated cycles that produce multiple units. Because one-unit cycles are the easiest to understand, implement, and control, they are widely used i...
Minimal and Maximal Operator Space Structures on Banach Spaces
P., Vinod Kumar; Balasubramani, M. S.
2014-01-01
Given a Banach space $X$, there are many operator space structures possible on $X$, which all have $X$ as their first matrix level. Blecher and Paulsen identified two extreme operator space structures on $X$, namely $Min(X)$ and $Max(X)$, which represent, respectively, the smallest and the largest operator space structures admissible on $X$. In this note, we consider the subspace and the quotient space structure of minimal and maximal operator spaces.
On the maximal dimension of a completely entangled subspace for ...
Indian Academy of Sciences (India)
max_{S ∈ E} dim S = d1 d2 ··· dk − (d1 + ··· + dk) + k − 1, where E is the collection of all completely entangled subspaces. When H1 = H2 and k = 2 an explicit orthonormal basis of a maximal completely entangled subspace of H1 ⊗ H2 is given. We also introduce a more delicate notion of a perfectly entangled subspace for a multipartite ...
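The dimension formula can be sanity-checked on the smallest bipartite case; the following is a sketch (the k = 2, d1 = d2 = 2 check is a standard example, not taken from this abstract):

```latex
\max_{S \in \mathcal{E}} \dim S
  = d_1 d_2 \cdots d_k - (d_1 + \cdots + d_k) + k - 1.
% Smallest bipartite case: k = 2, d_1 = d_2 = 2 gives
%   4 - 4 + 2 - 1 = 1,
% matching the one-dimensional antisymmetric (singlet) subspace of
% \mathbb{C}^2 \otimes \mathbb{C}^2, which contains no product vector.
```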
Planning Routes Across Economic Terrains: Maximizing Utility, Following Heuristics
Zhang, Hang; Maddula, Soumya V.; Maloney, Laurence T.
2010-01-01
We designed an economic task to investigate human planning of routes in landscapes where travel in different kinds of terrain incurs different costs. Participants moved their finger across a touch screen from a starting point to a destination. The screen was divided into distinct kinds of terrain and travel within each kind of terrain imposed a cost proportional to distance traveled. We varied costs and spatial configurations of terrains and participants received fixed bonuses minus the total cost of the routes they chose. We first compared performance to a model maximizing gain. All but one of 12 participants failed to adopt least-cost routes and their failure to do so reduced their winnings by about 30% (median value). We tested in detail whether participants’ choices of routes satisfied three necessary conditions (heuristics) for a route to maximize gain. We report failures of one heuristic for 7 out of 12 participants. Last of all, we modeled human performance with the assumption that participants assign subjective utilities to costs and maximize utility. For 7 out of 12 participants, the fitted utility function was an accelerating power function of actual cost and for the remaining 5, a decelerating power function. We discuss connections between utility aggregation in route planning and decision under risk. Our task could be adapted to investigate human strategy and optimality of route planning in full-scale landscapes. PMID:21833269
PLANNING ROUTES ACROSS ECONOMIC TERRAINS: MAXIMIZING UTILITY, FOLLOWING HEURISTICS
Directory of Open Access Journals (Sweden)
Hang eZhang
2010-12-01
Full Text Available We designed an economic task to investigate human planning of routes in landscapes where travel in different kinds of terrain incurs different costs. Participants moved their finger across a touch screen from a starting point to a destination. The screen was divided into distinct kinds of terrain and travel within each kind of terrain imposed a cost proportional to distance traveled. We varied costs and spatial configurations of terrains and participants received fixed bonuses minus the total cost of the routes they chose. We first compared performance to a model maximizing gain. All but one of 12 participants failed to adopt least-cost routes and their failure to do so reduced their winnings by about 30% (median value). We tested in detail whether participants’ choices of routes satisfied three necessary conditions (heuristics) for a route to maximize gain. We report failures of one heuristic for 7 out of 12 participants. Last of all, we modeled human performance with the assumption that participants assign subjective utilities to costs and maximize utility. For 7 out of 12 participants, the fitted utility function was an accelerating power function of actual cost and for the remaining 5, a decelerating power function. We discuss connections between utility aggregation in route planning and decision under risk. Our task could be adapted to investigate human strategy and optimality of route planning in full-scale landscapes.
Maximally efficient protocols for direct secure quantum communication
Energy Technology Data Exchange (ETDEWEB)
Banerjee, Anindita [Department of Physics and Materials Science Engineering, Jaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307 (India); Department of Physics and Center for Astroparticle Physics and Space Science, Bose Institute, Block EN, Sector V, Kolkata 700091 (India); Pathak, Anirban, E-mail: anirban.pathak@jiit.ac.in [Department of Physics and Materials Science Engineering, Jaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307 (India); RCPTM, Joint Laboratory of Optics of Palacky University and Institute of Physics of Academy of Science of the Czech Republic, Faculty of Science, Palacky University, 17. Listopadu 12, 77146 Olomouc (Czech Republic)
2012-10-01
Two protocols for deterministic secure quantum communication (DSQC) using GHZ-like states have been proposed. It is shown that one of these protocols is maximally efficient and can be modified into an equivalent protocol of quantum secure direct communication (QSDC). Security and efficiency of the proposed protocols are analyzed and compared. It is shown that dense coding is sufficient but not essential for DSQC and QSDC protocols. Maximally efficient QSDC protocols are shown to be more efficient than their DSQC counterparts. This additional efficiency arises at the cost of message transmission rate. -- Highlights: ► Two protocols for deterministic secure quantum communication (DSQC) are proposed. ► One of the above protocols is maximally efficient. ► It is modified into an equivalent protocol of quantum secure direct communication (QSDC). ► It is shown that dense coding is sufficient but not essential for DSQC and QSDC protocols. ► Efficient QSDC protocols are always more efficient than their DSQC counterparts.
An efficient community detection algorithm using greedy surprise maximization
International Nuclear Information System (INIS)
Jiang, Yawen; Jia, Caiyan; Yu, Jian
2014-01-01
Community detection is an important and crucial problem in complex network analysis. Although classical modularity function optimization approaches are widely used for identifying communities, the modularity function (Q) suffers from its resolution limit. Recently, the surprise function (S) was experimentally shown to be better than the Q function. However, until now, no algorithm has been available that searches directly for maximal surprise values. In this paper, considering the superiority of the S function over the Q function, we propose an efficient community detection algorithm called AGSO (algorithm based on greedy surprise optimization) and its improved version FAGSO (fast-AGSO), which are based on greedy surprise optimization and do not suffer from the resolution limit. In addition, (F)AGSO does not need the number of communities K to be specified in advance. Tests on experimental networks show that (F)AGSO is able to detect optimal partitions in both simple and more complex networks. Moreover, algorithms based on surprise maximization perform better than algorithms based on modularity maximization, including Blondel–Guillaume–Lambiotte–Lefebvre (BGLL), Clauset–Newman–Moore (CNM) and other state-of-the-art algorithms such as Infomap, the order statistics local optimization method (OSLOM) and the label propagation algorithm (LPA). (paper)
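The surprise function that such algorithms greedily maximize can be computed directly as minus the log of a cumulative hypergeometric probability of the observed intra-community links. The following is a minimal sketch of that computation, not the paper's AGSO/FAGSO search; the toy graph and partitions are illustrative:

```python
import math

def surprise(n_nodes, edges, communities):
    """Surprise of a partition: minus the log of the cumulative hypergeometric
    probability of drawing at least the observed number of intra-community
    links when the graph's links are placed among all node pairs at random."""
    M = n_nodes * (n_nodes - 1) // 2                     # possible links
    m = len(edges)                                       # actual links
    M_int = sum(len(c) * (len(c) - 1) // 2 for c in communities)
    member = {v: i for i, c in enumerate(communities) for v in c}
    m_int = sum(1 for u, v in edges if member[u] == member[v])
    p = sum(math.comb(M_int, j) * math.comb(M - M_int, m - j)
            for j in range(m_int, min(m, M_int) + 1)) / math.comb(M, m)
    return -math.log(p)

# Two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = [[0, 1, 2], [3, 4, 5]]   # the natural partition
bad  = [[0, 1, 3], [2, 4, 5]]   # a shuffled partition
print(surprise(6, edges, good) > surprise(6, edges, bad))  # True
```

A greedy optimizer in the AGSO spirit would repeatedly apply the move (merge or node reassignment) that most increases this score.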
Formation Control of the MAXIM L2 Libration Orbit Mission
Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda
2004-01-01
The Micro-Arcsecond X-ray Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imager mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 micro-arcsecond imaging. Currently the mission architecture comprises 25 spacecraft, 24 as optics modules and one as the detector, which will form sparse sub-apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. To achieve these mission goals, the formation is required to cooperatively point at desired targets. Once pointed, the individual elements of the MAXIM formation must remain stable, maintaining their relative positions and attitudes below a critical threshold. These pointing and formation stability requirements impact the control and design of the formation. In this paper, we provide analysis of control efforts that are dependent upon the stability and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions to minimize the control efforts and we address continuous control via input feedback linearization (IFL). Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.
Cut-off Grade Optimization for Maximizing the Output Rate
Directory of Open Access Journals (Sweden)
A. Khodayari
2012-12-01
Full Text Available In open-pit mining, one of the first decisions that must be made at the production planning stage, after completing the design of the final pit limits, is determining the processing plant cut-off grade. Since this grade has an essential effect on operations, choosing the optimum cut-off grade is of considerable importance. Different goals may be used for determining the optimum cut-off grade. One of these goals may be maximizing the output rate (amount of product per year), which is very important, especially from the marketing and market-share points of view. The objective of this research is to determine the optimum cut-off grade of the processing plant in order to maximize the output rate. For this optimization, an Operations Research (OR) model has been developed. The objective function of this model is the output rate, which must be maximized. The model has two operational constraints, namely mining and processing restrictions. To solve the model, a heuristic method has been developed. Results show that the optimum cut-off grade for the stated goal is the balancing grade of the mining and processing operations, and that the maximum production rate is a function of the maximum capacity of the processing plant and the average grade of ore that, according to this optimum cut-off grade, must be sent to the plant.
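The balancing-grade result can be illustrated numerically. This sketch assumes a hypothetical uniform grade-tonnage curve and made-up mining and processing capacities (none of these numbers come from the paper); a grid search over cut-off grades then recovers the grade at which the two capacities balance:

```python
def tonnage_above(cutoff):
    """Hypothetical grade-tonnage model: fraction of mined material whose
    grade exceeds `cutoff`, with grades uniform on [0, 1]."""
    return max(0.0, 1.0 - cutoff)

def avg_grade_above(cutoff):
    """Average grade of material above the cut-off under the uniform model."""
    return (1.0 + cutoff) / 2.0

def output_rate(cutoff, mining_cap=2.0e6, processing_cap=1.2e6):
    """Product per year: ore sent to the plant times its average grade,
    limited by both the mining and the processing capacity (tons/year)."""
    ore = min(mining_cap * tonnage_above(cutoff), processing_cap)
    return ore * avg_grade_above(cutoff)

cutoffs = [i / 1000 for i in range(1000)]
best = max(cutoffs, key=output_rate)
print(best)  # the grade where mining and processing capacities balance
```

With these assumed capacities, 2.0e6 * (1 - g) = 1.2e6 at g = 0.4, and the search lands exactly on that balancing grade, mirroring the paper's conclusion.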
Effect of sonic driving on maximal aerobic performance.
Brilla, L.R.; Hatcher, Stefanie
2000-07-01
The study purpose was to evaluate antecedent binaural stimulation (ABS) on maximal aerobic physical performance. Twenty-two healthy, physically active subjects, 21-34 years, randomly received one of two preparations for each session: 15 min of quiet (BLANK) or percussive sonic driving at 200+ beats per minute (bpm) using a recorded compact disc (FSS, Mill Valley, CA) with headphones (ABS). Baseline HR, blood pressure (BP), and breathing frequency (f(br)) were obtained. During each condition, HR and f(br) were recorded at 3-min intervals. The graded maximal treadmill testing was administered immediately post-preparation session on separate days, with at least 48 h rest between sessions. There were significant differences in the antecedent period means between the two conditions, ABS (HR: 70.2 +/- 10.7 bpm; f(br): 18.5 +/- 3.3 br min(-1); BP: 134.5/87.9 +/- 13.6/9.2 mm Hg) and BLANK (HR: 64.6 +/- 7.9; f(br): 14.3 +/- 2.9; BP: 126.7/80.3 +/- 12.1/8.6). Differences were noted for each 3-min interval and for the pre- and post-antecedent periods. The maximal graded exercise test (GXT) results showed a small but significant difference (P < 0.05). There may be a latency to ABS related to entrainment or imagery-enhanced warm-up. Am. J. Hum. Biol. 12:558-565, 2000. Copyright 2000 Wiley-Liss, Inc.
Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing
International Nuclear Information System (INIS)
King, Stephen F.; Zhang, Jue; Zhou, Shun
2016-01-01
The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix is successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ_23 = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.
Off-diagonal mass generation for Yang-Mills theories in the maximal Abelian gauge
International Nuclear Information System (INIS)
Dudal, D.; Verschelde, H.; Sarandy, M.S.
2007-01-01
We investigate a dynamical mass generation mechanism for the off-diagonal gluons and ghosts in SU(N) Yang-Mills theories, quantized in the maximal Abelian gauge. Such a mass can be seen as evidence for the Abelian dominance in that gauge. It originates from the condensation of a mixed gluon-ghost operator of mass dimension two, which lowers the vacuum energy. We construct an effective potential for this operator by a combined use of the local composite operators technique with algebraic renormalization and we discuss the gauge parameter independence of the results. We also show that it is possible to connect the vacuum energy, due to the mass dimension two condensate discussed here, with the non-trivial vacuum energy originating from the dimension-two condensate ⟨A²_μ⟩, which has attracted much attention in the Landau gauge. (author)
Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems
Directory of Open Access Journals (Sweden)
Weeraddana Chathuranga
2010-01-01
Full Text Available We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low-complexity suboptimal resource allocation algorithm is obtained for joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition method, which is inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results are provided to compare the performance of the proposed algorithm with Lagrange-relaxation-based suboptimal methods as well as with the optimal exhaustive-search method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.
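For orientation on what "weighted sum-rate maximization" means here, a common low-complexity baseline assigns each subcarrier to the user with the best weighted rate under equal per-subcarrier power. This is only a sketch of that baseline, not the paper's primal-decomposition algorithm, and the weights and channel gains below are illustrative:

```python
import math

def greedy_assignment(weights, gains, total_power):
    """Assign each subcarrier to the user maximizing the weighted rate
    w_k * log2(1 + p * g[k][n]) under equal power p per subcarrier."""
    n_sub = len(gains[0])
    p = total_power / n_sub          # equal power split across subcarriers
    assignment, wsr = [], 0.0
    for n in range(n_sub):
        best = max(range(len(weights)),
                   key=lambda k: weights[k] * math.log2(1 + p * gains[k][n]))
        assignment.append(best)
        wsr += weights[best] * math.log2(1 + p * gains[best][n])
    return assignment, wsr

weights = [1.0, 2.0]                  # user priorities (illustrative)
gains = [[3.0, 0.5, 1.0],             # channel gains, user 0
         [0.4, 2.5, 1.0]]             # channel gains, user 1
assignment, wsr = greedy_assignment(weights, gains, total_power=3.0)
print(assignment)  # [0, 1, 1]: the higher-weight user wins ties
```

A primal-decomposition scheme like the paper's would instead coordinate the subcarrier-assignment and power-allocation subproblems rather than fixing equal power.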
Ghassemi Tari, Farhad; Neghabi, Hossein
2018-03-01
An effective facility layout implies that departments with high flow between them are laid adjacent. However, when the boundary length between neighbouring departments is very narrow, the adjacency is of little practical use. In traditional layout design methods, an adjacency score is generally assigned independently of the departments' shared boundary length, which may result in a layout design with restricted material flow. This article proposes a new concept of adjacency in which department pairs are laid adjacent with a wider path between them. To apply this concept, a shop with unequal rectangular departments is considered and a mathematical programming model with the objective of maximizing the sum of the adjacency degrees is proposed. A computational experiment is conducted to demonstrate the efficiency of the layout design. It is demonstrated that the new concept provides a more efficient and more realistic layout design.
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are K-means and the expectation maximization (EM) algorithm. Logistic regression extends linear regression analysis to a category-type dependent variable, predicting the possibility of occurrence of an event from a linear combination of independent variables. However, classifying all data by logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
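A minimal sketch of the two clustering algorithms the abstract compares, written in plain Python on 1-D toy data (the data and parameter choices are illustrative, not the paper's wine dataset): K-means makes hard assignments to the nearest center, while EM for a Gaussian mixture uses soft responsibilities.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm on 1-D data: alternate hard assignment and mean update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

def em_gmm(points, k=2, iters=50):
    """EM for a 1-D two-component Gaussian mixture: soft responsibilities
    in the E-step, weighted parameter re-estimation in the M-step."""
    mus, sigmas, weights = [min(points), max(points)], [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for p in points:   # E-step: responsibility of each component
            dens = [w * math.exp(-(p - m) ** 2 / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))
                    for w, m, s in zip(weights, mus, sigmas)]
            total = sum(dens)
            resp.append([d / total for d in dens])
        for j in range(k): # M-step: re-estimate from weighted points
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(points)
            mus[j] = sum(r[j] * p for r, p in zip(resp, points)) / nj
            sigmas[j] = max(1e-3, math.sqrt(
                sum(r[j] * (p - mus[j]) ** 2 for r, p in zip(resp, points)) / nj))
    return sorted(mus)

data = [1.0, 1.2, 0.8, 1.1, 5.0, 5.3, 4.8, 5.1]
print(kmeans(data, 2))  # two centers, near 1.0 and 5.0
print(em_gmm(data))     # two component means, near 1.0 and 5.0
```

On well-separated data the two methods agree; EM's soft assignments matter most when clusters overlap, which is one reason the paper evaluates both.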
Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing
Energy Technology Data Exchange (ETDEWEB)
King, Stephen F. [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Zhang, Jue [Center for High Energy Physics, Peking University,Beijing 100871 (China); Zhou, Shun [Center for High Energy Physics, Peking University,Beijing 100871 (China); Institute of High Energy Physics, Chinese Academy of Sciences,Beijing 100049 (China)
2016-12-06
The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix is successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ_23 = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.
On independence in risk communication
International Nuclear Information System (INIS)
Lacronique, J. F.
2006-01-01
The term 'independence' is a common key word used by almost all stakeholders in the field of nuclear safety regulation. The intention is to persuade the public that it can have more confidence and trust in the persons in charge if their competence and judgment cannot be altered by any kind of political issue or personal interest. However, it is worth questioning the reality of this claimed quality: how can one verify that an organization that claims 'independence' really respects it? National expert institutions can show that they are independent from industry, but can they claim total independence from government? NGOs have built a large part of their constituency on 'independence' from industry and governments, but are they independent from the ideological forces - sometimes very powerful - that support them? How can we make this noble word truly meaningful? We show through different examples that 'independence' is by definition a fragile and versatile challenge rather than a durable label. It has to be refreshed regularly and thoroughly. Risk communication, in that context, must respect principles that build independence as a solid asset, and keep a certain distance from mere marketing purposes or candid wishful thinking.
Defining and Selecting Independent Directors
Directory of Open Access Journals (Sweden)
Eric Pichet
2017-10-01
Full Text Available Drawing from the Enlightened Shareholder Theory that the author first developed in 2011, this theoretical paper with practical and normative ambitions achieves a better definition of independent director, while improving the understanding of the roles he fulfils on boards of directors. The first part defines constructs like firms, Governance system and Corporate governance, offering a clear distinction between the latter two concepts before explaining the four main missions of a board. The second part defines the ideal independent director by outlining the objective qualities that are necessary and adding those subjective aspects that have turned this into a veritable profession. The third part defines the ideal process for selecting independent directors, based on nominating committees that should themselves be independent. It also includes ways of assessing directors who are currently in function, as well as modalities for renewing their mandates. The paper’s conclusion presents the Paradox of the Independent Director.
Compositional models for credal sets
Czech Academy of Sciences Publication Activity Database
Vejnarová, Jiřina
2017-01-01
Roč. 90, č. 1 (2017), s. 359-373 ISSN 0888-613X R&D Projects: GA ČR(CZ) GA16-12010S Institutional support: RVO:67985556 Keywords : Imprecise probabilities * Credal sets * Multidimensional models * Conditional independence Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 2.845, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/vejnarova-0483288.pdf
Schram, Ben; Hing, Wayne; Climstein, Mike
2016-01-01
Stand-up paddle boarding (SUP) is a rapidly growing sport and recreational activity for which only anecdotal evidence exists on its proposed health, fitness, and injury-rehabilitation benefits. 10 internationally and nationally ranked elite SUP athletes. Participants were assessed for their maximal aerobic power on an ergometer in a laboratory and compared with other water-based athletes. Field-based assessments were subsequently performed using a portable gas-analysis system, and a correlation between the 2 measures was performed. Maximal aerobic power (relative) was significantly higher (P = .037) when measured in the field with a portable gas-analysis system (45.48 ± 6.96 mL · kg(-1) · min(-1)) than with laboratory-based metabolic-cart measurements (43.20 ± 6.67 mL · kg(-1) · min(-1)). There was a strong, positive correlation (r = .907) between laboratory and field maximal aerobic power results. Significantly higher (P = .000) measures of SUP paddling speed were found in the field than with the laboratory ergometer (+42.39%). There were no significant differences in maximal heart rate between the laboratory and field settings (P = .576). The results demonstrate the maximal aerobic power representative of internationally and nationally ranked SUP athletes and show that SUP athletes can be assessed for maximal aerobic power in the laboratory with high correlation to field-based measures. The field-based portable gas-analysis unit has a tendency to consistently measure higher oxygen consumption. Elite SUP athletes display aerobic power outputs similar to those of other upper-limb-dominant elite water-based athletes (surfing, dragon-boat racing, and canoeing).