WorldWideScience

Sample records for maximum independent set

  1. Distributed Large Independent Sets in One Round On Bounded-independence Graphs

    OpenAIRE

    Halldorsson , Magnus M.; Konrad , Christian

    2015-01-01

    We present a randomized one-round distributed algorithm with single-bit messages for the maximum independent set problem in polynomially bounded-independence graphs, achieving a poly-logarithmic approximation factor. Bounded-independence graphs capture various models of wireless networks, such as the unit disc graph model and the quasi unit disc graph model. For instance, on unit disc graphs, our achieved approximation ratio is O((log(n)/log(log(n)))^2). A starting point of our w...
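
    The record above is truncated, so the following is only a hedged sketch of the general one-round flavor of such algorithms, not the authors' single-bit protocol: every node joins a candidate set with a degree-dependent probability and keeps itself only if no neighbor also joined. The graph, the probability choice, and all names are illustrative assumptions.

    ```python
    import random

    def one_round_independent_set(adj, rng=random.Random(0)):
        """One synchronous round: each node marks itself with probability
        1/(deg(v) + 1); a node enters the output set only if none of its
        neighbors is also marked, so the result is always independent
        (though, unlike a full MIS algorithm, not necessarily maximal)."""
        marked = {v: rng.random() < 1.0 / (len(adj[v]) + 1) for v in adj}
        return {v for v in adj if marked[v] and not any(marked[u] for u in adj[v])}

    # Example: a 5-cycle given as an adjacency dict.
    cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
    S = one_round_independent_set(cycle5)
    assert all(u not in S for v in S for u in cycle5[v])  # independence holds
    print(S)
    ```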

  2. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  3. Halo-independence with quantified maximum entropy at DAMA/LIBRA

    Energy Technology Data Exchange (ETDEWEB)

    Fowlie, Andrew, E-mail: andrew.j.fowlie@googlemail.com [ARC Centre of Excellence for Particle Physics at the Tera-scale, Monash University, Melbourne, Victoria 3800 (Australia)

    2017-10-01

    Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.

  4. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio ... The estimation performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error.
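
    The record does not spell out the likelihood model, so the sketch below assumes a deliberately simplified one (not the paper's framed-slotted ALOHA protocol): R independent reader sessions, each detecting every one of n tags independently with a known probability q. The ML cardinality estimate then maximizes the product of binomial likelihoods of the per-session detection counts. All names and numbers are illustrative.

    ```python
    from math import comb, log

    def ml_cardinality(counts, q, n_max=10_000):
        """ML estimate of the tag count n, given per-session detection
        counts and per-tag detection probability q = 1 - error rate."""
        def loglik(n):
            if any(k > n for k in counts):
                return float("-inf")
            return sum(log(comb(n, k)) + k * log(q) + (n - k) * log(1 - q)
                       for k in counts)
        return max(range(max(counts), n_max + 1), key=loglik)

    # Example: three sessions over an unknown tag population, 80% detection.
    print(ml_cardinality([81, 78, 84], q=0.8))  # close to the true n = 100
    ```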

  5. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  6. Reference values of maximum walking speed among independent community-dwelling Danish adults aged 60 to 79 years

    DEFF Research Database (Denmark)

    Tibaek, S; Holmestad-Bechmann, N; Pedersen, Trine B

    2015-01-01

    OBJECTIVES: To establish reference values for maximum walking speed over 10m for independent community-dwelling Danish adults, aged 60 to 79 years, and to evaluate the effects of gender and age. DESIGN: Cross-sectional study. SETTING: Danish companies and senior citizens clubs. PARTICIPANTS: Two ...

  7. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence.

    Science.gov (United States)

    Li, Sui-Xian

    2018-05-07

    Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected filter set to be the one with the maximum ℓ₂ norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak along the wavelength axis of the first filter, a generally uniform distribution for the peaks of the filters, and substantial overlaps of the transmittance curves of adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting the optimal filter set is recommended, which guarantees a significant enhancement of the performance of the systems. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
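
    As a rough illustration of the vector-analysis idea behind MLI selection (the transmittance data and sizes below are synthetic assumptions, not the paper's filter library), one can greedily add the filter whose component orthogonal to the span of the already-selected transmittance vectors is largest. The paper's exhaustive simulation over first filters amounts to calling such a routine once per candidate first index and comparing, e.g., condition numbers.

    ```python
    import numpy as np

    def mli_select(T, k, first=None):
        """T: (num_filters, num_wavelengths) transmittance matrix.
        Returns indices of k filters chosen by greedy linear independence."""
        norms = np.linalg.norm(T, axis=1)
        chosen = [int(np.argmax(norms)) if first is None else first]
        for _ in range(k - 1):
            Q, _ = np.linalg.qr(T[chosen].T)       # orthonormal basis of span
            resid = T - (T @ Q) @ Q.T              # component outside the span
            resid[chosen] = 0.0                    # never re-pick a filter
            chosen.append(int(np.argmax(np.linalg.norm(resid, axis=1))))
        return chosen

    rng = np.random.default_rng(1)
    T = rng.random((40, 61))                       # 40 filters, 61 wavelengths
    print(mli_select(T, k=6))                      # indices of selected filters
    ```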

  8. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence

    Directory of Open Access Journals (Sweden)

    Sui-Xian Li

    2018-05-01

    Full Text Available Previous research has shown that the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI. However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected filter set to be the maximum ℓ2 norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performed filter set comprise a distinct transmittance peak along the wavelength axis of the first filter, a generally uniform distribution for the peaks of the filters and substantial overlaps of the transmittance curves of the adjacent filters. Therefore, the best-performed filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting optimal filter set is recommended, which guarantees a significant enhancement of the performance of the systems. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.

  9. Tutte sets in graphs II: The complexity of finding maximum Tutte sets

    NARCIS (Netherlands)

    Bauer, D.; Broersma, Haitze J.; Kahl, N.; Morgana, A.; Schmeichel, E.; Surowiec, T.

    2007-01-01

    A well-known formula of Tutte and Berge expresses the size of a maximum matching in a graph $G$ in terms of what is usually called the deficiency. A subset $X$ of $V(G)$ for which this deficiency is attained is called a Tutte set of $G$. While much is known about maximum matchings, less is known

  10. Reconfiguring Independent Sets in Claw-Free Graphs

    NARCIS (Netherlands)

    Bonsma, P.S.; Kamiński, Marcin; Wrochna, Marcin; Ravi, R.; Gørtz, Inge Li

    We present a polynomial-time algorithm that, given two independent sets in a claw-free graph G, decides whether one can be transformed into the other by a sequence of elementary steps. Each elementary step is to remove a vertex v from the current independent set S and to add a new vertex w (not in S) such that the result is again an independent set.
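
    A minimal, generic checker for one such elementary step (sometimes called token jumping), assuming an adjacency-dict graph representation; this is only a sketch of the step definition, not the paper's claw-free-specific decision algorithm.

    ```python
    def is_independent(adj, S):
        return all(u not in S for v in S for u in adj[v])

    def elementary_step(adj, S, v, w):
        """Remove v from S and add w (not already in S); return the new set
        if it is again independent, else None."""
        if v not in S or w in S:
            raise ValueError("step must remove a member and add a non-member")
        T = (S - {v}) | {w}
        return T if is_independent(adj, T) else None

    # On the path a-b-c-d: from {a, c}, removing c and adding d yields {a, d},
    # which is independent; removing c and adding b fails (b is adjacent to a).
    path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(elementary_step(path, {"a", "c"}, "c", "d"))  # {'a', 'd'}
    print(elementary_step(path, {"a", "c"}, "c", "b"))  # None
    ```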

  11. The number of independent sets in unicyclic graphs

    DEFF Research Database (Denmark)

    Pedersen, Anders Sune; Vestergaard, Preben Dahl

      In this paper, we determine upper and lower bounds for the number of independent sets in a unicyclic graph in terms of its order. This gives an upper bound for the number of independent sets in a connected graph which contains at least one cycle. We also determine the upper bound for the number...
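
    For orientation, the quantity being bounded, the number of independent sets of a graph, can be computed exactly on trees by a standard two-state dynamic program (a unicyclic graph reduces to tree counts after deleting a cycle edge and correcting for sets containing both endpoints). The sketch below is illustrative and not from the paper.

    ```python
    def count_independent_sets(adj, root=0):
        """Count all independent sets of a tree, including the empty set."""
        def dfs(v, parent):
            with_v, without_v = 1, 1
            for u in adj[v]:
                if u != parent:
                    w, wo = dfs(u, v)
                    with_v *= wo           # children must be excluded
                    without_v *= w + wo    # children free to be in or out
            return with_v, without_v
        return sum(dfs(root, None))

    # A path on 4 vertices has Fibonacci(6) = 8 independent sets.
    path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(count_independent_sets(path4))  # 8
    ```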

  12. Handelman's hierarchy for the maximum stable set problem

    NARCIS (Netherlands)

    Laurent, M.; Sun, Z.

    2014-01-01

    The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a

  13. Setting the renormalization scale in QCD: The principle of maximum conformality

    DEFF Research Database (Denmark)

    Brodsky, S. J.; Di Giustino, L.

    2012-01-01

    A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale mu of the running coupling alpha(s)(mu(2)). The purpose of the running coupling in any gauge theory is to sum all terms involving the beta function; in fact, when the renormalization scale is set properly, all nonconformal beta not equal 0 terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to those of a conformal theory, i.e., the corresponding theory with beta = 0. The resulting scale-fixed predictions using the principle of maximum conformality (PMC) are independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale setting in the Abelian limit...

  14. Outer-2-independent domination in graphs

    Indian Academy of Sciences (India)

    An outer-2-independent dominating set of a graph G is a set D of vertices of G such that every vertex of V(G)\D has a neighbor in D and the maximum vertex degree of the subgraph induced by V(G)\D is at most one. The outer-2-independent domination ...
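
    A small checker for the definition as reconstructed above, under the assumption that the symbols lost in extraction are the graph G, its vertex set V(G), and the set D; the example graph is illustrative.

    ```python
    def is_o2i_dominating(adj, D):
        """Every vertex outside D has a neighbor in D (domination), and the
        subgraph induced by the outside vertices has maximum degree <= 1."""
        outside = set(adj) - D
        dominated = all(any(u in D for u in adj[v]) for v in outside)
        low_degree = all(sum(u in outside for u in adj[v]) <= 1 for v in outside)
        return dominated and low_degree

    # On the 4-cycle 0-1-2-3: {0} fails (vertex 2 has no neighbor in it),
    # while {0, 2} satisfies both conditions.
    c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(is_o2i_dominating(c4, {0}), is_o2i_dominating(c4, {0, 2}))  # False True
    ```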

  15. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    Science.gov (United States)

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  16. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    Science.gov (United States)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating-set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability to large-scale real-world systems: 1) small set size, 2) minimal network information required for the construction scheme, 3) fast and easy computational implementation, and 4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
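
    Why a maximal independent set dominates: if some vertex had no neighbor in the set, it could itself be added, contradicting maximality. A hedged sketch of the simplest sequential MIS construction and a domination check follows; the study's scale-free networks and selection strategies are not reproduced here.

    ```python
    def greedy_mis(adj, order=None):
        """Build a maximal independent set by scanning vertices in order."""
        S = set()
        for v in (order or sorted(adj)):
            if all(u not in S for u in adj[v]):
                S.add(v)
        return S

    def dominates(adj, S):
        return all(v in S or any(u in S for u in adj[v]) for v in adj)

    star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # hub-and-spokes graph
    S = greedy_mis(star)
    print(S, dominates(star, S))                    # {0} True
    ```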

  17. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies have been predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that detections with at least three outputs suffice for optimum extraction of information regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied, where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit.

  18. An electromagnetism-like method for the maximum set splitting problem

    Directory of Open Access Journals (Sweden)

    Kratica Jozef

    2013-01-01

    Full Text Available In this paper, an electromagnetism-like approach (EM) for solving the maximum set splitting problem (MSSP) is applied. The hybrid approach, consisting of movement based on attraction-repulsion mechanisms combined with the proposed scaling technique, directs EM to promising search regions. A fast implementation of the local search procedure additionally improves the efficiency of the overall EM system. The performance of the proposed EM approach is evaluated on two classes of instances from the literature: minimum hitting set and Steiner triple systems. The results show that, except in one case, EM reaches optimal solutions on minimum hitting set instances with up to 500 elements and 50,000 subsets. It also reaches all optimal/best-known solutions for Steiner triple systems.

  19. SuperTRI: A new approach based on branch support analyses of multiple independent data sets for assessing reliability of phylogenetic inferences.

    Science.gov (United States)

    Ropiquet, Anne; Li, Blaise; Hassanin, Alexandre

    2009-09-01

    Supermatrix and supertree are two methods for constructing a phylogenetic tree from multiple data sets. However, these methods are not a panacea, as conflicting signals between data sets can lead to misinterpretation of the evolutionary history of taxa. In particular, the supermatrix approach is expected to be misleading if the species-tree signal is not dominant after the combination of the data sets. Moreover, most current supertree methods suffer from two limitations: (i) they ignore or misinterpret secondary (non-dominant) phylogenetic signals of the different data sets; and (ii) the logical basis of node robustness measures is unclear. To overcome these limitations, we propose a new approach, called SuperTRI, which is based on branch support analyses of the independent data sets, and where the reliability of the nodes is assessed using three measures: the supertree Bootstrap percentage and two other values calculated from the separate analyses: the mean branch support (mean Bootstrap percentage or mean posterior probability) and the reproducibility index. The SuperTRI approach is tested on a data matrix including seven genes for 82 taxa of the family Bovidae (Mammalia, Ruminantia), and the results are compared to those found with the supermatrix approach. The phylogenetic analyses of the supermatrix and independent data sets were done using four methods of tree reconstruction: Bayesian inference, maximum likelihood, and unweighted and weighted maximum parsimony. The results indicate, firstly, that the SuperTRI approach is less sensitive to the choice among the four phylogenetic methods, secondly, that it interprets the relationships among taxa more accurately, and thirdly, that interesting conclusions on introgression and radiation can be drawn from the comparisons between the SuperTRI and supermatrix analyses.

  20. Triangle-free graphs whose independence number equals the degree

    DEFF Research Database (Denmark)

    Brandt, Stephan

    2010-01-01

    In a triangle-free graph, the neighbourhood of every vertex is an independent set. We investigate the class S of triangle-free graphs where the neighbourhoods of vertices are maximum independent sets. Such a graph G must be regular of degree d = α (G) and the fractional chromatic number must sati...

  1. Reliability analysis of a sensitive and independent stabilometry parameter set.

    Science.gov (United States)

    Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature, or not for every stance type used in stabilometry assessments (for example, single leg stances). The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Usually, frequency-type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.
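
    For orientation, the reliability statistics named here can be computed from a subjects-by-sessions matrix as in the sketch below, which applies the standard Shrout-Fleiss ICC(2,1) formula, SEM = SD * sqrt(1 - ICC), and MDC95 = 1.96 * sqrt(2) * SEM to synthetic data; the study's actual CoP parameters and sample are not reproduced.

    ```python
    import numpy as np

    def icc_2_1(X):
        """Two-way random-effects, single-measure ICC(2,1)."""
        n, k = X.shape                  # subjects, repeated sessions
        grand = X.mean()
        MSR = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # rows
        MSC = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # columns
        SSE = ((X - X.mean(axis=1, keepdims=True)
                  - X.mean(axis=0, keepdims=True) + grand) ** 2).sum()
        MSE = SSE / ((n - 1) * (k - 1))
        return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

    rng = np.random.default_rng(7)
    subject = rng.normal(50, 10, size=(30, 1))       # stable trait per subject
    X = subject + rng.normal(0, 3, size=(30, 2))     # two noisy sessions
    icc = icc_2_1(X)
    sem = X.std(ddof=1) * np.sqrt(1 - icc)           # pooled SD (sketch only)
    print(round(icc, 2), round(sem, 2), round(1.96 * np.sqrt(2) * sem, 2))
    ```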

  2. Reliability analysis of a sensitive and independent stabilometry parameter set

    Science.gov (United States)

    Nagymáté, Gergely; Orlovits, Zsanett

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature, or not for every stance type used in stabilometry assessments (for example, single leg stances). The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54–0.79), largest SEM% = 19.2%). Usually, frequency-type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals. PMID:29664938

  3. Testing the statistical compatibility of independent data sets

    International Nuclear Information System (INIS)

    Maltoni, M.; Schwetz, T.

    2003-01-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ² minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistic is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed
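
    A toy numerical sketch of this idea, assuming the test statistic is the difference between the joint χ² minimum and the sum of the individual minima, with degrees of freedom equal to the summed per-set parameters minus the shared parameters (two Gaussian data sets sharing one parameter; all numbers illustrative).

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import chi2

    def chi2_set(theta, data, sigma):
        return ((data - theta) ** 2 / sigma ** 2).sum()

    setA = np.array([1.0, 1.2, 0.9])   # prefers theta near 1
    setB = np.array([2.1, 1.9, 2.0])   # prefers theta near 2
    total = lambda th: chi2_set(th, setA, 0.2) + chi2_set(th, setB, 0.2)

    chi2_joint = minimize_scalar(total).fun
    chi2_indiv = (minimize_scalar(lambda th: chi2_set(th, setA, 0.2)).fun
                  + minimize_scalar(lambda th: chi2_set(th, setB, 0.2)).fun)
    chi2_pg = chi2_joint - chi2_indiv   # dof = (1 + 1) - 1 shared parameter
    print(chi2_pg, chi2.sf(chi2_pg, df=1))  # large value -> incompatible sets
    ```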

  4. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set of a six-component balance.

  5. Decomposing a planar graph into an independent set and a 3-degenerate graph

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2001-01-01

    We prove the conjecture made by O. V. Borodin in 1976 that the vertex set of every planar graph can be decomposed into an independent set and a set inducing a 3-degenerate graph.

  6. Distributed-Memory Fast Maximal Independent Set

    Energy Technology Data Exchange (ETDEWEB)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-09-13

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All those algorithms are designed focusing on shared-memory machines and are analyzed using the PRAM model. These algorithms do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
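
    A sequential (single-process) simulation of the random-priority round structure commonly used to present Luby-style MIS algorithms; this is a hedged sketch, not the exact coin-flip variants Luby(A)/Luby(B) that the paper extends, and the distributed-memory partitioning and messaging that are its subject are omitted.

    ```python
    import random

    def luby_mis(adj, rng=random.Random(42)):
        """Each round, every live vertex draws a random priority; local
        minima join the MIS and are removed together with their neighbors."""
        live, mis = set(adj), set()
        while live:
            prio = {v: rng.random() for v in live}
            winners = {v for v in live
                       if all(prio[v] < prio[u] for u in adj[v] if u in live)}
            mis |= winners
            live -= winners | {u for v in winners for u in adj[v]}
        return mis

    # Example: a 4x4 grid graph.
    grid = {(i, j): [(i + di, j + dj)
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= i + di < 4 and 0 <= j + dj < 4]
            for i in range(4) for j in range(4)}
    S = luby_mis(grid)
    assert all(u not in S for v in S for u in grid[v])                # independent
    assert all(v in S or any(u in S for u in grid[v]) for v in grid)  # maximal
    print(len(S))
    ```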

  7. An application of the maximal independent set algorithm to course ...

    African Journals Online (AJOL)

    In this paper, we demonstrated one of the many applications of the Maximal Independent Set Algorithm in the area of course allocation. A program was developed in Pascal and used in implementing a modified version of the algorithm to assign teaching courses to available lecturers in any academic environment and it ...

  8. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs

    Directory of Open Access Journals (Sweden)

    Long Wan

    2015-01-01

    Full Text Available This paper investigates a single-machine two-agent scheduling problem to minimize the maximum cost with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of a job is characterized by a variable function dependent on the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.

  9. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  10. Decomposing a planar graph of girth 5 into an independent set and a forest

    DEFF Research Database (Denmark)

    Kawarabayashi, Ken-ichi; Thomassen, Carsten

    2009-01-01

    We use a list-color technique to extend the result of Borodin and Glebov that the vertex set of every planar graph of girth at least 5 can be partitioned into an independent set and a set which induces a forest. We apply this extension to also extend Grötzsch's theorem that every planar triangle-...

  11. Merrifield-simmons index and minimum number of independent sets in short trees

    DEFF Research Database (Denmark)

    Frendrup, Allan; Pedersen, Anders Sune; Sapozhenko, Alexander A.

    2013-01-01

    In Ars Comb. 84 (2007), 85-96, Pedersen and Vestergaard posed the problem of determining a lower bound for the number of independent sets in a tree of fixed order and diameter d. Asymptotically, we give here a complete solution for trees of diameter d...

  12. P wave dispersion and maximum P wave duration are independently associated with rapid renal function decline.

    Science.gov (United States)

    Su, Ho-Ming; Tsai, Wei-Chung; Lin, Tsung-Hsien; Hsu, Po-Chao; Lee, Wen-Hsien; Lin, Ming-Yen; Chen, Szu-Chia; Lee, Chee-Siong; Voon, Wen-Chol; Lai, Wen-Ter; Sheu, Sheng-Hsiung

    2012-01-01

    The P wave parameters measured by 12-lead electrocardiogram (ECG) are commonly used as noninvasive tools to assess for left atrial enlargement. There are limited studies evaluating whether P wave parameters are independently associated with decline in renal function. Accordingly, the aim of this study is to assess whether P wave parameters are independently associated with progression to the renal end point of ≥25% decline in estimated glomerular filtration rate (eGFR). This longitudinal study included 166 patients. The renal end point was defined as ≥25% decline in eGFR. We measured two ECG P wave parameters corrected by heart rate, i.e. corrected P wave dispersion (PWdisperC) and corrected P wave maximum duration (PWdurMaxC). Heart function and structure were measured from echocardiography. Clinical data, P wave parameters, and echocardiographic measurements were compared and analyzed. Forty-three patients (25.9%) reached the renal end point. Kaplan-Meier curves for renal end point-free survival showed that PWdisperC > median (63.0 ms) (log-rank P = 0.004) and PWdurMaxC > median (117.9 ms) (log-rank P ...) were associated with progression to the renal end point, indicating that P wave dispersion and maximum P wave duration are independently associated with rapid renal function decline.

  13. Maximal independent set graph partitions for representations of body-centered cubic lattices

    DEFF Research Database (Denmark)

    Erleben, Kenny

    2009-01-01

    A maximal independent set graph data structure for a body-centered cubic lattice is presented. Refinement and coarsening operations are defined in terms of set-operations, resulting in robust and easy implementation compared to a quad-tree-based implementation. The graph only stores information corresponding to the leaves of a quad-tree and thus has a smaller memory foot-print. The adjacency information in the graph relieves one from going up and down the quad-tree when searching for neighbors. This results in constant time complexities for refinement and coarsening operations.

  14. Scope of physician procedures independently billed by mid-level providers in the office setting.

    Science.gov (United States)

    Coldiron, Brett; Ratnarathorn, Mondhipa

    2014-11-01

    Mid-level providers (nurse practitioners and physician assistants) were originally envisioned to provide primary care services in underserved areas. This study details the current scope of independent procedural billing to Medicare of difficult, invasive, and surgical procedures by medical mid-level providers. Objective To understand the scope of independent billing to Medicare for procedures performed by mid-level providers in an outpatient office setting for a calendar year. Design Analyses of the 2012 Medicare Physician/Supplier Procedure Summary Master File, which reflects fee-for-service claims that were paid by Medicare, for Current Procedural Terminology procedures independently billed by mid-level providers. Setting Outpatient office setting among health care providers. Main outcome measure The scope of independent billing to Medicare for procedures performed by mid-level providers. Results In 2012, nurse practitioners and physician assistants billed independently for more than 4 million procedures at our cutoff of 5,000 paid claims per procedure. Most (54.8%) of these procedures were performed in the specialty area of dermatology. Conclusions The findings of this study are relevant to safety and quality of care. Recently, the shortage of primary care clinicians has prompted discussion of widening the scope of practice for mid-level providers. It would be prudent to temper widening the scope of practice of mid-level providers by recognizing that mid-level providers are not solely limited to primary care, and may perform procedures for which they may not have formal training.

  15. Independent predictors of tuberculosis mortality in a high HIV prevalence setting: a retrospective cohort study.

    Science.gov (United States)

    Pepper, Dominique J; Schomaker, Michael; Wilkinson, Robert J; de Azevedo, Virginia; Maartens, Gary

    2015-01-01

    Identifying those at increased risk of death during TB treatment is a priority in resource-constrained settings. We performed this study to determine predictors of mortality during TB treatment. We performed a retrospective analysis of a TB surveillance population in a high HIV prevalence area that was recorded in ETR.net (Electronic Tuberculosis Register). Adult TB cases who initiated TB treatment from 2007 through 2009 in Khayelitsha, South Africa, were included. Cox proportional hazards models were used to identify risk factors for death (after multiple imputation for missing data). Model selection was performed using Akaike's Information Criterion to obtain the most relevant predictors of death. Of 16,209 adult TB cases, 851 (5.3%) died during TB treatment. In all TB cases, advancing age, co-infection with HIV, a prior history of TB and the presence of both pulmonary and extra-pulmonary TB were independently associated with an increasing hazard of death. In HIV-infected TB cases, advancing age and female gender were independently associated with an increasing hazard of death. Increasing CD4 counts and antiretroviral treatment during TB treatment were protective against death. In HIV-uninfected TB cases, advancing age was independently associated with death, whereas smear-positive disease was protective. We identified several independent predictors of death during TB treatment in resource-constrained settings. Our findings inform resource-constrained settings about certain subgroups of TB patients that should be targeted to improve mortality during TB treatment.

  16. An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets

    DEFF Research Database (Denmark)

    Nielsen, Michael Bang; Museth, Ken

    2004-01-01

    ... enforced by the convex boundaries of an underlying cartesian computational grid. Here we present a novel, very memory-efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid-independent high-resolution level sets. The key features of our new data structure are...

  17. Generalizations of the subject-independent feature set for music-induced emotion recognition.

    Science.gov (United States)

    Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping

    2011-01-01

    Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet, how to achieve acceptable accuracy in a practical system with as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features, based on differential power asymmetry of symmetric electrode pairs [1], with emphasis on its applicability to subject variability in the music-induced emotion classification problem. The results of this study have evidently validated the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotion induced by music excerpts not limited to the music database that was used to derive the emotion-specific features.

  18. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.

  19. Quality indicators to compare accredited independent pharmacies and accredited chain pharmacies in Thailand.

    Science.gov (United States)

    Arkaravichien, Wiwat; Wongpratat, Apichaya; Lertsinudom, Sunee

    2016-08-01

    Background Quality indicators determine the quality of actual practice in reference to standard criteria. The Community Pharmacy Association (Thailand), with technical support from the International Pharmaceutical Federation, developed a tool for quality assessment and quality improvement at community pharmacies. This tool has passed validity and reliability tests, but has not yet had feasibility testing. Objective (1) To test whether this quality tool could be used in routine settings. (2) To compare quality scores between accredited independent and accredited chain pharmacies. Setting Accredited independent pharmacies and accredited chain pharmacies in the north-eastern region of Thailand. Methods A cross-sectional study was conducted in 34 accredited independent pharmacies and accredited chain pharmacies. Quality scores were assessed by observation and by interviewing the responsible pharmacists. Data were collected and analyzed by independent t-test and Mann-Whitney U test as appropriate. Results were plotted by histogram and spider chart. Main outcome measure Domains' assessable scores, possible maximum scores, mean and median of measured scores. Results Domains' assessable scores were close to the domains' possible maximum scores, meaning that most indicators could be assessed in most pharmacies. The spider chart revealed that measured scores in the personnel, drug inventory and stocking, and patient satisfaction and health promotion domains of chain pharmacies were significantly higher than those of independent pharmacies (p ...). There was no significant difference between independent pharmacies and chain pharmacies in the premise and facility or dispensing and patient care domains. Conclusion Quality indicators developed by the Community Pharmacy Association (Thailand) could be used to assess the quality of practice in pharmacies in routine settings. The quality scores of chain pharmacies were higher than those of independent pharmacies.

  20. Calculating the Prior Probability Distribution for a Causal Network Using Maximum Entropy: Alternative Approaches

    Directory of Open Access Journals (Sweden)

    Michael J. Markham

    2011-07-01

    Full Text Available Some problems occurring in Expert Systems can be resolved by employing a causal (Bayesian) network, and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context, including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the over-all distribution. This is followed by development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints. These fall into two groups: those relating parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.
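
    A minimal sketch of the underlying optimization, computing a maximum-entropy distribution under linear constraints with an off-the-shelf solver; the ternary example and tooling are assumptions for illustration and do not implement the Method of Tribus or its sequential variant.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    values = np.array([0.0, 1.0, 2.0])      # support of a ternary variable

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)          # guard log(0)
        return (p * np.log(p)).sum()        # minimize -H(p)

    constraints = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},      # normalization
        {"type": "eq", "fun": lambda p: p @ values - 0.5},   # mean constraint
    ]
    res = minimize(neg_entropy, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
                   constraints=constraints, method="SLSQP")
    print(res.x.round(4))   # Gibbs-form solution: p_i proportional to exp(l*x_i)
    ```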

  1. Temporal and Spatial Independent Component Analysis for fMRI Data Sets Embedded in the AnalyzeFMRI R Package

    Directory of Open Access Journals (Sweden)

    Pierre Lafaye de Micheaux

    2011-10-01

    Full Text Available For the statistical analysis of functional magnetic resonance imaging (fMRI) data sets, we propose a data-driven approach based on independent component analysis (ICA), implemented in a new version of the AnalyzeFMRI R package. For fMRI data sets, the spatial dimension being much greater than the temporal dimension, spatial ICA is the computationally tractable approach generally proposed. However, for some neuroscientific applications, temporal independence of source signals can be assumed, and temporal ICA then becomes an attractive exploratory technique. In this work, we use a classical linear algebra result ensuring the tractability of temporal ICA. We report several experiments on synthetic data and real MRI data sets that demonstrate the potential interest of our R package.

  2. Transversals and independence in linear hypergraphs with maximum degree two

    DEFF Research Database (Denmark)

    Henning, Michael A.; Yeo, Anders

    2017-01-01

    ... linear, k-uniform hypergraphs with maximum degree 2. It is known [European J. Combin. 36 (2014), 231–236] that if H ∈ Hk, with n vertices and m edges, then (k + 1)τ(H) ≤ n + m, and there are only two hypergraphs that achieve equality in the bound. In this paper, we prove a much more powerful result, and establish tight upper bounds...

  3. EEG-based recognition of video-induced emotions: selecting subject-independent feature set.

    Science.gov (United States)

    Kortelainen, Jukka; Seppänen, Tapio

    2013-01-01

    Emotions are fundamental for everyday life, affecting our communication, learning, perception, and decision making. Including emotions in human-computer interaction (HCI) could be seen as a significant step forward, offering great potential for developing advanced future technologies. While the electrical activity of the brain is affected by emotions, the electroencephalogram (EEG) offers an interesting channel for improving the HCI. In this paper, the selection of a subject-independent feature set for EEG-based emotion recognition is studied. We investigate the effect of different feature sets in classifying a person's arousal and valence while watching videos with emotional content. The classification performance is optimized by applying a sequential forward floating search algorithm for feature selection. The best classification rate (65.1% for arousal and 63.0% for valence) is obtained with a feature set containing power spectral features from the frequency band of 1-32 Hz. The proposed approach substantially improves the classification rate reported in the literature. In future work, further analysis of the video-induced EEG changes, including the topographical differences in the spectral features, is needed.

  4. An upper bound on the number of independent sets in a tree

    DEFF Research Database (Denmark)

    Pedersen, Anders Sune

    2007-01-01

    The main result of this paper is an upper bound on the number of independent sets in a tree in terms of the order and diameter of the tree. This new upper bound is a refinement of the bound given by Prodinger and Tichy [Fibonacci Q., 20 (1982), no. 1, 16-21]. Finally, we give a sufficient condition for the new upper bound to be better than the upper bound given by Brigham, Chandrasekharan and Dutton [Fibonacci Q., 31 (1993), no. 2, 98-104].

  5. An upper bound on the number of independent sets in a tree

    DEFF Research Database (Denmark)

    Vestergaard, Preben Dahl; Pedersen, Anders Sune

    The main result of this paper is an upper bound on the number of independent sets in a tree in terms of the order and diameter of the tree. This new upper bound is a refinement of the bound given by Prodinger and Tichy [Fibonacci Q., 20 (1982), no. 1, 16-21]. Finally, we give a sufficient condition for the new upper bound to be better than the upper bound given by Brigham, Chandrasekharan and Dutton [Fibonacci Q., 31 (1993), no. 2, 98-104].

  6. Evaluating Fast Maximum Likelihood-Based Phylogenetic Programs Using Empirical Phylogenomic Data Sets

    Science.gov (United States)

    Zhou, Xiaofan; Shen, Xing-Xing; Hittinger, Chris Todd

    2018-01-01

    The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this question, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs’ relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses. PMID:29177474

  7. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  8. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Hong-Hao [Chongqing Univ., Chongqing (People's Republic of China); Wu, Xing-Gang [Chongqing Univ., Chongqing (People's Republic of China); Ma, Yang [Chongqing Univ., Chongqing (People's Republic of China); Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States); Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)

    2015-05-26

    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent on the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R

  9. Experimental Implementation of a Kochen-Specker Set of Quantum Tests

    Directory of Open Access Journals (Sweden)

    Vincenzo D’Ambrosio

    2013-02-01

    Full Text Available The conflict between classical and quantum physics can be identified through a series of yes-no tests on quantum systems, without it being necessary that these systems be in special quantum states. Kochen-Specker (KS sets of yes-no tests have this property and provide a quantum-versus-classical advantage that is free of the initialization problem that affects some quantum computers. Here, we report the first experimental implementation of a complete KS set that consists of 18 yes-no tests on four-dimensional quantum systems and show how to use the KS set to obtain a state-independent quantum advantage. We first demonstrate the unique power of this KS set for solving a task while avoiding the problem of state initialization. Such a demonstration is done by showing that, for 28 different quantum states encoded in the orbital-angular-momentum and polarization degrees of freedom of single photons, the KS set provides an impossible-to-beat solution. In a second experiment, we generate maximally contextual quantum correlations by performing compatible sequential measurements of the polarization and path of single photons. In this case, state independence is demonstrated for 15 different initial states. Maximum contextuality and state independence follow from the fact that the sequences of measurements project any initial quantum state onto one of the KS set’s eigenstates. Our results show that KS sets can be used for quantum-information processing and quantum computation and pave the way for future developments.

  10. Experimental Implementation of a Kochen-Specker Set of Quantum Tests

    Science.gov (United States)

    D'Ambrosio, Vincenzo; Herbauts, Isabelle; Amselem, Elias; Nagali, Eleonora; Bourennane, Mohamed; Sciarrino, Fabio; Cabello, Adán

    2013-01-01

    The conflict between classical and quantum physics can be identified through a series of yes-no tests on quantum systems, without it being necessary that these systems be in special quantum states. Kochen-Specker (KS) sets of yes-no tests have this property and provide a quantum-versus-classical advantage that is free of the initialization problem that affects some quantum computers. Here, we report the first experimental implementation of a complete KS set that consists of 18 yes-no tests on four-dimensional quantum systems and show how to use the KS set to obtain a state-independent quantum advantage. We first demonstrate the unique power of this KS set for solving a task while avoiding the problem of state initialization. Such a demonstration is done by showing that, for 28 different quantum states encoded in the orbital-angular-momentum and polarization degrees of freedom of single photons, the KS set provides an impossible-to-beat solution. In a second experiment, we generate maximally contextual quantum correlations by performing compatible sequential measurements of the polarization and path of single photons. In this case, state independence is demonstrated for 15 different initial states. Maximum contextuality and state independence follow from the fact that the sequences of measurements project any initial quantum state onto one of the KS set’s eigenstates. Our results show that KS sets can be used for quantum-information processing and quantum computation and pave the way for future developments.

  11. Invariant Set Theory: Violating Measurement Independence without Fine Tuning, Conspiracy, Constraints on Free Will or Retrocausality

    Directory of Open Access Journals (Sweden)

    Tim Palmer

    2015-11-01

    Full Text Available Invariant Set (IS) theory is a locally causal ontic theory of physics based on the Cosmological Invariant Set postulate that the universe U can be considered a deterministic dynamical system evolving precisely on a (suitably constructed) fractal dynamically invariant set in U's state space. IS theory violates the Bell inequalities by violating Measurement Independence. Despite this, IS theory is not fine tuned, is not conspiratorial, does not constrain experimenter free will and does not invoke retrocausality. The reasons behind these claims are discussed in this paper. They arise from properties not found in conventional ontic models: the invariant set has zero measure in its Euclidean embedding space, has Cantor Set structure homeomorphic to the p-adic integers (p >> 0) and is non-computable. In particular, it is shown that the p-adic metric encapsulates the physics of the Cosmological Invariant Set postulate, and provides the technical means to demonstrate no fine tuning or conspiracy. Quantum theory can be viewed as the singular limit of IS theory when p is set equal to infinity. Since it is based around a top-down constraint from cosmology, IS theory suggests that gravitational and quantum physics will be unified by a gravitational theory of the quantum, rather than a quantum theory of gravity. Some implications arising from such a perspective are discussed.

  12. Are Independent Fiscal Institutions Really Independent?

    Directory of Open Access Journals (Sweden)

    Slawomir Franek

    2015-08-01

    Full Text Available In the last decade the number of independent fiscal institutions (known also as fiscal councils) has tripled. They play an important oversight role over fiscal policy-making in democratic societies, especially as they seek to restore public finance stability in the wake of the recent financial crisis. Although common functions of such institutions include a role in analysis of fiscal policy, forecasting, monitoring compliance with fiscal rules or costing of spending proposals, their roles, resources and structures vary considerably across countries. The aim of the article is to determine the degree of independence of such institutions based on the analysis of the independence index of independent fiscal institutions. The analysis of these index values may be useful to determine the relations between the degree of independence of fiscal councils and the fiscal performance of particular countries. The data used to calculate the index values will be derived from the European Commission and the IMF, which collect sets of information about the characteristics of the activity of fiscal councils.

  13. Design and optimization of a modal-independent linear ultrasonic motor.

    Science.gov (United States)

    Zhou, Shengli; Yao, Zhiyuan

    2014-03-01

    To simplify the design of the linear ultrasonic motor (LUSM) and improve its output performance, a method of modal decoupling for LUSMs is proposed in this paper. The specific embodiment of this method is decoupling of the traditional LUSM stator's complex vibration into two simple vibrations, with each vibration implemented by one vibrator. Because the two vibrators are designed independently, their frequencies can be tuned independently and frequency consistency is easy to achieve. Thus, the method can simplify the design of the LUSM. Based on this method, a prototype modal-independent LUSM is designed and fabricated. The motor reaches its maximum thrust force of 47 N, maximum unloaded speed of 0.43 m/s, and maximum power of 7.85 W at an applied voltage of 200 Vpp. The motor's structure is then optimized by controlling the difference between the two vibrators' resonance frequencies to reach larger output speed, thrust, and power. The optimized results show that when the frequency difference is 73 Hz, the output force, speed, and power reach their maximum values. At an input voltage of 200 Vpp, the motor reaches its maximum thrust force of 64.2 N, maximum unloaded speed of 0.76 m/s, maximum power of 17.4 W, maximum thrust-weight ratio of 23.7, and maximum efficiency of 39.6%.

  14. Mutational Profiling Can Establish Clonal or Independent Origin in Synchronous Bilateral Breast and Other Tumors.

    Directory of Open Access Journals (Sweden)

    Lei Bao

    Full Text Available Synchronous tumors can be independent primary tumors or a primary-metastatic (clonal) pair, which may have clinical implications. Mutational profiling of tumor DNA is increasingly common in the clinic. We investigated whether mutational profiling can distinguish independent from clonal tumors in breast and other cancers, using a carefully defined test based on the Clonal Likelihood Score (CLS = 100 x # shared high-confidence (HC) mutations / # total HC mutations). Statistical properties of a formal test using the CLS were investigated. A high CLS is evidence in favor of clonality; the test is implemented as a one-sided binomial test of proportions. Test parameters were empirically determined using 16,422 independent breast tumor pairs and 15 primary-metastatic tumor pairs from 10 cancer types using The Cancer Genome Atlas. We validated performance of the test with its established parameters, using five published data sets comprising 15,758 known independent tumor pairs (maximum CLS = 4.1%, minimum p-value = 0.48) and 283 known tumor clonal pairs (minimum CLS 13%, maximum p-value 0.99, supporting independence). A plausible molecular mechanism for the shift from hormone receptor positive to triple negative was identified in the clonal pair. We have developed the statistical properties of a carefully defined Clonal Likelihood Score test from mutational profiling of tumor DNA. Under identified conditions, the test appears to reliably distinguish between synchronous tumors of clonal and of independent origin in several cancer types. This approach may have scientific and clinical utility.
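
    A minimal sketch of the test as described in the abstract is given below, using the quoted CLS definition and a one-sided binomial test of proportions. The null proportion p0 and the mutation counts are placeholders, not the paper's empirically calibrated parameters.

    ```python
    from scipy.stats import binomtest

    def clonal_likelihood_score(shared_hc: int, total_hc: int) -> float:
        # CLS = 100 x (# shared high-confidence mutations) / (# total HC mutations)
        return 100.0 * shared_hc / total_hc

    # One-sided binomial test of proportions: is the shared fraction larger than
    # a null proportion p0 expected for independent tumors? p0 and the counts
    # below are placeholders; the paper calibrates its parameters on TCGA pairs.
    shared, total, p0 = 12, 60, 0.02
    cls = clonal_likelihood_score(shared, total)
    result = binomtest(shared, total, p=p0, alternative="greater")
    print(f"CLS = {cls:.1f}%, one-sided p = {result.pvalue:.3g}")
    ```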

  15. Analyzing ROC curves using the effective set-size model

    Science.gov (United States)

    Samuelson, Frank W.; Abbey, Craig K.; He, Xin

    2018-03-01

    The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical imaging.
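
    Under the model as summarized (the rating is the maximum of M* independent location responses, one of which is shifted by d' on signal-present images), the model ROC curve has a closed form. The sketch below assumes unit-variance normal location responses; parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    def ess_roc(d_prime: float, m_star: float, t: np.ndarray):
        """ROC implied by the model as summarized above: the image rating is the
        max of M* independent unit-normal location responses, one of which is
        shifted by d' on signal-present images (M* need not be an integer)."""
        fpf = 1.0 - norm.cdf(t) ** m_star
        tpf = 1.0 - norm.cdf(t) ** (m_star - 1.0) * norm.cdf(t - d_prime)
        return fpf, tpf

    t = np.linspace(-4.0, 6.0, 500)
    for m_star in (1.0, 4.0, 16.0):
        fpf, tpf = ess_roc(d_prime=2.0, m_star=m_star, t=t)
        # fpf decreases with t; integrate TPF over increasing FPF for the AUC.
        auc = np.sum((fpf[:-1] - fpf[1:]) * (tpf[:-1] + tpf[1:]) / 2.0)
        print(f"M* = {m_star:4.1f}: AUC = {auc:.3f}")  # more searching, lower AUC
    ```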

  16. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
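
    The noise-bias phenomenon itself is easy to reproduce in a toy setting: a point estimate that is a nonlinear function of noisy measurements is biased even when the measurements themselves are unbiased. The Monte Carlo below illustrates only this mechanism with an ellipticity-like ratio; it is not the paper's galaxy-model estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy illustration of noise bias (not the paper's galaxy-model estimator):
    # an ellipticity-like ratio e = (a - b) / (a + b) of two unbiased but noisy
    # measurements is itself a biased estimate, increasingly so at low S/N.
    a_true, b_true = 2.0, 1.0
    e_true = (a_true - b_true) / (a_true + b_true)

    for snr in (50, 20, 5):
        sigma = a_true / snr
        a = a_true + sigma * rng.standard_normal(200_000)
        b = b_true + sigma * rng.standard_normal(200_000)
        e_hat = (a - b) / (a + b)
        print(f"S/N = {snr:2d}: mean bias on e = {e_hat.mean() - e_true:+.5f}")
    ```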

  17. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
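
    Why the maximized statistic needs a higher critical value can be seen in a toy null simulation: taking the maximum of several (correlated) fixed-parameter scores inflates the null distribution. The sketch below assumes jointly normal score statistics with an arbitrary correlation; it is not a full linkage model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy null simulation (not a full linkage model): LOD-like scores evaluated
    # at k candidate parameter values are correlated; maximizing over them
    # inflates the null distribution, hence the higher critical value.
    n_sim, k, rho = 100_000, 3, 0.7
    cov = rho * np.ones((k, k)) + (1.0 - rho) * np.eye(k)
    z = rng.multivariate_normal(np.zeros(k), cov, size=n_sim)
    lod = z**2 / (2.0 * np.log(10.0))   # LOD = likelihood-ratio chi-square / (2 ln 10)

    print(f"95th pct, single fixed-parameter LOD: {np.quantile(lod[:, 0], 0.95):.2f}")
    print(f"95th pct, max over {k} parameter values: {np.quantile(lod.max(axis=1), 0.95):.2f}")
    ```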

  18. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  19. Dinosaur Metabolism and the Allometry of Maximum Growth Rate

    OpenAIRE

    Myhrvold, Nathan P.

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth...

  20. Maximum-confidence discrimination among symmetric qudit states

    International Nuclear Information System (INIS)

    Jimenez, O.; Solis-Prosser, M. A.; Delgado, A.; Neves, L.

    2011-01-01

    We study the maximum-confidence (MC) measurement strategy for discriminating among nonorthogonal symmetric qudit states. Restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (POVM) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. The physical realization of this POVM is completely determined and it is shown that after an inconclusive outcome, the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit Hilbert space. By applying the MC measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. This leads us to introduce the concept of sequential maximum-confidence (SMC) measurements, where the optimized MC strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. Within each stage of this measurement our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. In addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. We will discuss an explicit example of the optimal SMC measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.

  1. Better firm performance through board independence in a two-tier setting

    DEFF Research Database (Denmark)

    Schøler, Finn; Holm, Claus

    2013-01-01

    independence, these were retrieved from different sections in the corresponding annual reports, i.e. from different notes and different parts of the management commentary. We used structural equation models (IBM SPSS, AMOS 19) to model the hypothesized relationship between board independence and performance...

  2. Encoding Strategy for Maximum Noise Tolerance Bidirectional Associative Memory

    National Research Council Canada - National Science Library

    Shen, Dan

    2003-01-01

    In this paper, the Basic Bidirectional Associative Memory (BAM) is extended by choosing weights in the correlation matrix, for a given set of training pairs, which result in a maximum noise tolerance set for BAM...

  3. Compact printed multiband antenna with independent setting suitable for fixed and reconfigurable wireless communication systems

    KAUST Repository

    Abutarboush, Hattan

    2012-08-01

    This paper presents the design of a low-profile compact printed antenna for fixed frequency and reconfigurable frequency bands. The antenna consists of a main patch, four sub-patches, and a ground plane to generate five frequency bands, at 0.92, 1.73, 1.98, 2.4, and 2.9 GHz, for different wireless systems. For the fixed-frequency design, the five individual frequency bands can be adjusted and set independently over the wide ranges of 18.78%, 22.75%, 4.51%, 11%, and 8.21%, respectively, using just one parameter of the antenna. By putting a varactor (diode) at each of the sub-patch inputs, four of the frequency bands can be controlled independently over wide ranges and the antenna has a reconfigurable design. The tunability ranges for the four bands of 0.92, 1.73, 1.98, and 2.9 GHz are 23.5%, 10.30%, 13.5%, and 3%, respectively. The fixed and reconfigurable designs are studied using computer simulation. For verification of simulation results, the two designs are fabricated and the prototypes are measured. The results show a good agreement between simulated and measured results. © 1963-2012 IEEE.

  4. Compact printed multiband antenna with independent setting suitable for fixed and reconfigurable wireless communication systems

    KAUST Repository

    Abutarboush, Hattan; Nilavalan, Rajagopal; Cheung, Sing Wai; Nasr, Karim Medhat A

    2012-01-01

    This paper presents the design of a low-profile compact printed antenna for fixed frequency and reconfigurable frequency bands. The antenna consists of a main patch, four sub-patches, and a ground plane to generate five frequency bands, at 0.92, 1.73, 1.98, 2.4, and 2.9 GHz, for different wireless systems. For the fixed-frequency design, the five individual frequency bands can be adjusted and set independently over the wide ranges of 18.78%, 22.75%, 4.51%, 11%, and 8.21%, respectively, using just one parameter of the antenna. By putting a varactor (diode) at each of the sub-patch inputs, four of the frequency bands can be controlled independently over wide ranges and the antenna has a reconfigurable design. The tunability ranges for the four bands of 0.92, 1.73, 1.98, and 2.9 GHz are 23.5%, 10.30%, 13.5%, and 3%, respectively. The fixed and reconfigurable designs are studied using computer simulation. For verification of simulation results, the two designs are fabricated and the prototypes are measured. The results show a good agreement between simulated and measured results. © 1963-2012 IEEE.

  5. Non-local setting and outcome information for violation of Bell's inequality

    International Nuclear Information System (INIS)

    Pawlowski, Marcin; Kofler, Johannes; Paterek, Tomasz; Brukner, Caslav; Seevinck, Michael

    2010-01-01

    Bell's theorem is a no-go theorem stating that quantum mechanics cannot be reproduced by a physical theory based on realism, freedom to choose experimental settings and two locality conditions: setting (SI) and outcome (OI) independence. We provide a novel analysis of what it takes to violate Bell's inequality within the framework in which both realism and freedom of choice are assumed, by showing that it is impossible to model a violation without having information in one laboratory about both the setting and the outcome at the distant one. While it is possible that outcome information can be revealed from shared hidden variables, the assumed experimenter's freedom to choose the settings ensures that the setting information must be non-locally transferred even when the SI condition is obeyed. The amount of transmitted information about the setting that is sufficient to violate the CHSH inequality up to its quantum mechanical maximum is 0.736 bits.

  6. Einstein-Dirac theory in spin maximum I

    International Nuclear Information System (INIS)

    Crumeyrolle, A.

    1975-01-01

    A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A coupling between the gravitational and electromagnetic fields is pointed out, in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and the nullity of the torsion-vector has a spinor meaning. Finally, the Petiau-Duffin-Kemmer theory is incorporated in this geometric setting. [fr]

  7. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  8. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  9. Infinite set of relevant operators for an exact solution of the time-dependent Jaynes-Cummings Hamiltonian

    International Nuclear Information System (INIS)

    Gruver, J.L.; Aliaga, J.; Cerdeira, H.A.; Proto, A.N.

    1995-03-01

    The dynamics and thermodynamics of a quantum time-dependent field coupled to a two-level system, well known as the Jaynes-Cummings Hamiltonian, is studied, using the maximum entropy principle. In the framework of this approach we found three different infinite sets of relevant operators that describe the dynamics of the system for any temporal dependence. These sets of relevant operators are connected by isomorphisms, which allow us to consider the case of mixed initial conditions. A consistent set of initial conditions is established using the maximum entropy principle density operator, obtaining restrictions to the physically feasible initial conditions of the system. The behaviour of the population inversion is shown for different time dependencies of the Hamiltonian and initial conditions. For the time-independent case, an explicit solution for the population inversion in terms of the relevant operators of one of the sets is given. It is also shown how the well-known formulas for the population inversion are recovered for the special cases where the initial conditions correspond to a pure, coherent, and thermal field. (author). 35 refs, 9 figs

  10. Tsallis distribution as a standard maximum entropy solution with 'tail' constraint

    International Nuclear Information System (INIS)

    Bercher, J.-F.

    2008-01-01

    We show that Tsallis' distributions can be derived from the standard (Shannon) maximum entropy setting, by incorporating a constraint on the divergence between the distribution and another distribution imagined as its tail. In this setting, we find an underlying entropy which is the Renyi entropy. Furthermore, escort distributions and generalized means appear as a direct consequence of the construction. Finally, the 'maximum entropy tail distribution' is identified as a Generalized Pareto Distribution

  11. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy.... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing

  12. Maximum tumor diameter is not an independent prognostic factor in high-risk localized prostate cancer

    NARCIS (Netherlands)

    Oort, van I.M.; Witjes, J.A.; Kok, D.E.G.; Kiemeney, L.A.; Hulsbergen-van de Kaa, C.A.

    2008-01-01

    Previous studies suggest that maximum tumor diameter (MTD) is a predictor of recurrence in prostate cancer (PC). This study investigates the prognostic value of MTD for biochemical recurrence (BCR) in patients with PC, after radical prostatectomy (RP), with emphasis on high-risk localized prostate cancer.

  13. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    The solar energy is used as a power source in photovoltaic power systems, and the need for an intelligent power management system is important to obtain the maximum power from the limited solar panels. With the changing of the sun illumination due to variation of the angle of incidence of sun radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar-panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controls the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques of MPPT can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to the degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires the linguistic control rules for the maximum power point; the mathematical model is not required, and therefore the implementation of this control method is easy for a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the microchip's microcontroller unit control card and
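
    A minimal sketch of the perturb-and-observe (hill climbing) loop mentioned above follows, run against a made-up single-peak power-voltage curve; a fuzzy controller of the kind proposed in the paper would replace the fixed perturbation step with a rule-based, variable step size.

    ```python
    def pv_power(v: float) -> float:
        """Toy single-peak power-voltage curve for a PV panel (placeholder model)."""
        i_sc, v_oc = 8.0, 40.0                  # short-circuit current, open-circuit voltage
        i = i_sc * (1.0 - (v / v_oc) ** 12)     # crude current roll-off near V_oc
        return max(v * i, 0.0)

    def perturb_and_observe(v0: float = 20.0, step: float = 0.5, iters: int = 100) -> float:
        """Hill-climbing MPPT: keep perturbing the operating voltage in the
        direction that increased power, reverse when power drops. A fuzzy
        controller would instead pick the step size from linguistic rules."""
        v, p, direction = v0, pv_power(v0), 1.0
        for _ in range(iters):
            v_next = v + direction * step
            p_next = pv_power(v_next)
            if p_next < p:                      # power fell: reverse the perturbation
                direction = -direction
            v, p = v_next, p_next
        return v

    v_mpp = perturb_and_observe()
    print(f"settled near {v_mpp:.1f} V, P = {pv_power(v_mpp):.1f} W")  # MPP ~ 32 V here
    ```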

  14. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  15. The renormalization scale-setting problem in QCD

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xing-Gang [Chongqing Univ. (China); Brodsky, Stanley J. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Mojaza, Matin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Univ. of Southern Denmark, Odense (Denmark)

    2013-09-01

    A key problem in making precise perturbative QCD predictions is to set the proper renormalization scale of the running coupling. The conventional scale-setting procedure assigns an arbitrary range and an arbitrary systematic error to fixed-order pQCD predictions. In fact, this ad hoc procedure gives results which depend on the choice of the renormalization scheme, and it is in conflict with the standard scale-setting procedure used in QED. Predictions for physical results should be independent of the choice of the scheme or other theoretical conventions. We review current ideas and points of view on how to deal with the renormalization scale ambiguity and show how to obtain renormalization scheme- and scale-independent estimates. We begin by introducing the renormalization group (RG) equation and an extended version, which expresses the invariance of physical observables under both the renormalization scheme and scale-parameter transformations. The RG equation provides a convenient way for estimating the scheme- and scale-dependence of a physical process. We then discuss self-consistency requirements of the RG equations, such as reflexivity, symmetry, and transitivity, which must be satisfied by a scale-setting method. Four typical scale setting methods suggested in the literature, i.e., the Fastest Apparent Convergence (FAC) criterion, the Principle of Minimum Sensitivity (PMS), the Brodsky–Lepage–Mackenzie method (BLM), and the Principle of Maximum Conformality (PMC), are introduced. Basic properties and their applications are discussed. We pay particular attention to the PMC, which satisfies all of the requirements of RG invariance. Using the PMC, all non-conformal terms associated with the β-function in the perturbative series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC provides the principle underlying the BLM method, since it gives the general rule for extending
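
    The scale dependence at the heart of this problem already shows up in the one-loop running of the coupling. The sketch below evolves the coupling from an assumed reference value α_s(M_Z) ≈ 0.118 with the standard one-loop β-function coefficient; it illustrates the running itself, not the PMC procedure.

    ```python
    import math

    def alpha_s(q: float, mu: float = 91.19, alpha_mu: float = 0.118, n_f: int = 5) -> float:
        """One-loop running coupling evolved from an assumed reference value
        alpha_s(mu = M_Z) ~ 0.118, with b0 = (33 - 2 n_f) / (12 pi)."""
        b0 = (33.0 - 2.0 * n_f) / (12.0 * math.pi)
        return alpha_mu / (1.0 + alpha_mu * b0 * math.log(q**2 / mu**2))

    # A truncated fixed-order prediction inherits a residual dependence on the
    # choice of scale q -- the ambiguity that scale-setting methods such as the
    # PMC are designed to eliminate.
    for q in (10.0, 91.19, 500.0):
        print(f"alpha_s({q:6.2f} GeV) = {alpha_s(q):.4f}")
    ```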

  16. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  17. The spectrum of R Cygni during its exceptionally low maximum of 1983

    International Nuclear Information System (INIS)

    Wallerstein, G.; Dominy, J.F.; Mattei, J.A.; Smith, V.V.

    1985-01-01

    In 1983 R Cygni experienced its faintest maximum ever recorded. A study of the light curve shows correlations between brightness at maximum and the interval from the previous cycle, in the sense that fainter maxima occur later than normal and are followed by maxima that occur earlier than normal. Emission and absorption lines in the optical and near infrared (2.2 μm region) reveal two significant correlations. The amplitude of line doubling is independent of the magnitude at maximum for m_v(max) = 7.1 to 9.8. The velocities of the emission lines, however, correlate with the magnitude at maximum, in that during bright maxima they are negatively displaced by 15 km s⁻¹ with respect to the red component of absorption lines, while during the faintest maximum there is no displacement. (author)

  18. Inaugural Maximum Values for Sodium in Processed Food Products in the Americas.

    Science.gov (United States)

    Campbell, Norm; Legowski, Barbara; Legetic, Branka; Nilson, Eduardo; L'Abbé, Mary

    2015-08-01

    Reducing dietary salt/sodium is one of the most cost-effective interventions to improve population health. There are five initiatives in the Americas that independently developed targets for reformulating foods to reduce salt/sodium content. Applying selection criteria, recommended by the Pan American Health Organization (PAHO)/World Health Organization (WHO) Technical Advisory Group on Dietary Salt/Sodium Reduction, a consortium of governments, civil society, and food companies (the Salt Smart Consortium) agreed to an inaugural set of regional maximum targets (upper limits) for salt/sodium levels for 11 food categories, to be achieved by December 2016. Ultimately, to substantively reduce dietary salt across whole populations, targets will be needed for the majority of processed and pre-prepared foods. Cardiovascular and hypertension organizations are encouraged to utilize the regional targets in advocacy and in monitoring and evaluation of progress by the food industry. © 2015 Wiley Periodicals, Inc.

  19. Using SETS to find minimal cut sets in large fault trees

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1978-01-01

    An efficient algebraic algorithm for finding the minimal cut sets for a large fault tree was defined and a new procedure which implements the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the computer time required to determine the minimal cut sets when these techniques are not employed. It is shown for a given example that the execution time required to determine the minimal cut sets can be reduced from 7,686 seconds to 7 seconds when all of these techniques are employed
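
    The algebraic core of minimal cut set derivation can be sketched with a MOCUS-style expansion: expand the gate structure into a union of cut sets, then apply the absorption law to discard non-minimal ones. The code below illustrates that idea on a toy AND/OR tree; it is not the SETS implementation and omits the independent-subtree and gate-coalescing optimizations described above.

    ```python
    # MOCUS-style sketch of minimal cut set derivation (illustrative, not SETS):
    # expand the tree into a union of cut sets, then apply the absorption law
    # (drop any cut set that contains another) to keep only minimal ones.

    def cut_sets(node):
        if isinstance(node, str):                       # basic event
            return [frozenset([node])]
        op, children = node[0], node[1:]
        child_sets = [cut_sets(c) for c in children]
        if op == "OR":                                  # union of the children's cut sets
            out = [cs for group in child_sets for cs in group]
        else:                                           # "AND": cross-product of cut sets
            out = [frozenset()]
            for group in child_sets:
                out = [acc | cs for acc in out for cs in group]
        return minimize(out)

    def minimize(sets_):
        """Absorption: keep only cut sets with no proper subset in the collection."""
        uniq = set(sets_)
        return [s for s in uniq if not any(t < s for t in uniq)]

    # TOP = OR(AND(A, B), AND(A, B, C), AND(C, D))  ->  {A,B} absorbs {A,B,C}
    tree = ("OR", ("AND", "A", "B"), ("AND", "A", "B", "C"), ("AND", "C", "D"))
    print(sorted(sorted(s) for s in cut_sets(tree)))    # [['A', 'B'], ['C', 'D']]
    ```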

  20. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

    Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME) principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has a stronger objective basis, rooted in results from information theory, than their alternative proposed solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.

  1. Tutte sets in graphs I: Maximal tutte sets and D-graphs

    NARCIS (Netherlands)

    Bauer, D.; Broersma, Haitze J.; Morgana, A.; Schmeichel, E.

    A well-known formula of Tutte and Berge expresses the size of a maximum matching in a graph $G$ in terms of what is usually called the deficiency of $G$. A subset $X$ of $V(G)$ for which this deficiency is attained is called a Tutte set of $G$. While much is known about maximum matchings, less is
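    For reference, the formula in question can be stated as follows: with $\mathrm{odd}(G-X)$ denoting the number of odd components of $G-X$, the deficiency is $\mathrm{def}(G)=\max_{X\subseteq V(G)}\bigl(\mathrm{odd}(G-X)-|X|\bigr)$, and the Tutte-Berge formula gives the size of a maximum matching as $\nu(G)=\tfrac{1}{2}\bigl(|V(G)|-\mathrm{def}(G)\bigr)$.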

  2. Estimating the maximum potential revenue for grid connected electricity storage :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
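
    A minimal sketch of the arbitrage-only linear program described above (regulation-market participation omitted) is given below; the hourly prices and device parameters are made up, not the CAISO data used in the report.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Arbitrage-only LP sketch: choose hourly charge c_t and discharge d_t to
    # maximize price-weighted revenue subject to power and state-of-charge limits.
    prices = np.array([20, 15, 12, 14, 25, 40, 55, 45, 30, 22, 35, 60.0])  # $/MWh
    T = len(prices)
    p_max, energy, s0, eta = 1.0, 4.0, 2.0, 0.9   # MW, MWh capacity, initial MWh, charge eff.

    # Decision vector x = [c_0..c_{T-1}, d_0..d_{T-1}]; maximize sum_t p_t (d_t - c_t),
    # i.e. minimize sum_t p_t (c_t - d_t).
    obj = np.concatenate([prices, -prices])
    L = np.tril(np.ones((T, T)))                  # running-sum operator
    A_ub = np.block([[eta * L, -L],               # SOC_t = s0 + cumsum(eta c - d) <= energy
                     [-eta * L, L]])              # SOC_t >= 0
    b_ub = np.concatenate([(energy - s0) * np.ones(T), s0 * np.ones(T)])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, p_max)] * (2 * T), method="highs")
    print(f"maximum arbitrage revenue: ${-res.fun:.2f}")
    ```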

  3. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
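
    For reference, the structural index N enters through Euler's homogeneity equation (standard form): (x − x₀) ∂f/∂x + (y − y₀) ∂f/∂y + (z − z₀) ∂f/∂z = −N f, where (x₀, y₀, z₀) is the source position. Because the recovered source depth scales with the assumed N, evaluating the deconvolution at the largest admissible value Nmax bounds the depth from above, which is the strategy described in the abstract.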

  4. Assessing compatibility of direct detection data: halo-independent global likelihood analyses

    Energy Technology Data Exchange (ETDEWEB)

    Gelmini, Graciela B. [Department of Physics and Astronomy, UCLA,475 Portola Plaza, Los Angeles, CA 90095 (United States); Huh, Ji-Haeng [CERN Theory Division,CH-1211, Geneva 23 (Switzerland); Witte, Samuel J. [Department of Physics and Astronomy, UCLA,475 Portola Plaza, Los Angeles, CA 90095 (United States)

    2016-10-18

    We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.

  5. Thermoelectric cooler concepts and the limit for maximum cooling

    International Nuclear Information System (INIS)

    Seifert, W; Hinsche, N F; Pluschke, V

    2014-01-01

    The conventional analysis of a Peltier cooler approximates the material properties as independent of temperature using a constant properties model (CPM). Alternative concepts have been published by Bian and Shakouri (2006 Appl. Phys. Lett. 89 212101), Bian et al (2007 Phys. Rev. B 75 245208) and Snyder et al (2012 Phys. Rev. B 86 045202). While Snyder's Thomson cooler concept results from a consideration of compatibility, the method of Bian et al focuses on the redistribution of heat. Thus, both approaches are based on different principles. In this paper we compare the new concepts to CPM and we reconsider the limit for maximum cooling. The results provide a new perspective on maximum cooling. (paper)

  6. Maximum Permissible Concentrations and Negligible Concentrations for pesticides

    NARCIS (Netherlands)

    Crommentuijn T; Kalf DF; Polder MD; Posthumus R; Plassche EJ van de; CSR

    1997-01-01

    Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) derived for a series of pesticides are presented in this report. These MPCs and NCs are used by the Ministry of Housing, Spatial Planning and the Environment (VROM) to set Environmental Quality Objectives. For some of the

  7. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  8. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  9. High-Throughput Tabular Data Processor - Platform independent graphical tool for processing large data sets.

    Science.gov (United States)

    Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz

    2018-01-01

    High-throughput technologies generate a considerable amount of data which often requires bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expression, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease predisposing variants in the next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merge, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility, in terms of input file handling, provides long term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as the Open Source software (https://github.com/pmadanecki/htdp).

  10. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  11. Effects of fasting on maximum thermogenesis in temperature-acclimated rats

    Science.gov (United States)

    Wang, L. C. H.

    1981-09-01

    To further investigate the limiting effect of substrates on maximum thermogenesis in acute cold exposure, the present study examined the prevalence of this effect at different thermogenic capabilities consequent to cold- or warm-acclimation. Male Sprague-Dawley rats (n=11) were acclimated to 6, 16 and 26°C in succession, and their thermogenic capabilities after each acclimation temperature were measured under helium-oxygen (21% oxygen, balance helium) at -10°C after overnight fasting or feeding. Regardless of feeding conditions, both maximum and total heat production were significantly greater in the order 6 > 16 > 26°C-acclimated conditions. In the fed state, the total heat production was significantly greater than that in the fasted state at all acclimating temperatures, but the maximum thermogenesis was significantly greater only in the 6 and 16°C-acclimated states. The results indicate that the limiting effect of substrates on maximum and total thermogenesis is independent of the magnitude of thermogenic capability, suggesting a substrate-dependent component in restricting the effective expression of existing aerobic metabolic capability even under severe stress.

  12. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    Science.gov (United States)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc

  13. Chronic Effects of Different Rest Intervals Between Sets on Dynamic and Isometric Muscle Strength and Muscle Activity in Trained Older Women.

    Science.gov (United States)

    Jambassi Filho, José Claudio; Gurjão, André Luiz Demantova; Ceccato, Marilia; Prado, Alexandre Konig Garcia; Gallo, Luiza Herminia; Gobbi, Sebastião

    2017-09-01

    This study investigated the chronic effects of different rest intervals (RIs) between sets on dynamic and isometric muscle strength and muscle activity. We used a repeated-measures design (pretraining and posttraining) with independent groups (different RIs). Twenty-one resistance-trained older women (66.4 ± 4.4 years) were randomly assigned to either a 1-minute RI group (G-1 min; n = 10) or a 3-minute RI group (G-3 min; n = 11). Both groups completed 3 supervised sessions per week during 8 weeks. In each session, participants performed 3 sets of 15 repetitions of the leg press exercise, with a load that elicited muscle failure in the third set. Fifteen maximum repetitions, maximal voluntary contraction, peak rate of force development, and integrated electromyography activity of the vastus lateralis and vastus medialis muscles were assessed pretraining and posttraining. There was a significant increase in the load of 15 maximum repetitions posttraining for G-3 min only (3.6%; P < 0.05). The findings suggest that different RIs between sets did not influence dynamic and isometric muscle strength and muscle activity in resistance-trained older women.

  14. Some Results on the Independence Polynomial of Unicyclic Graphs

    Directory of Open Access Journals (Sweden)

    Oboudi Mohammad Reza

    2018-05-01

    Full Text Available Let G be a simple graph on n vertices. An independent set in a graph is a set of pairwise non-adjacent vertices. The independence polynomial of G is the polynomial I(G,x) = ∑_{k=0}^{n} s(G,k)x^k, where s(G,k) denotes the number of independent sets of G with k vertices.
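
    By way of illustration, I(G,x) can be computed directly from this definition by brute force for small graphs; the sketch below is exponential in n and is only meant to make the definition concrete.

    ```python
    from itertools import combinations

    def independence_polynomial(n, edges):
        """Coefficients s(G, k) of I(G, x) = sum_k s(G, k) x^k by brute force
        (fine for small graphs; counting independent sets is #P-hard in general)."""
        coeffs = [0] * (n + 1)
        for k in range(n + 1):
            for subset in combinations(range(n), k):
                s = set(subset)
                if all(not (u in s and v in s) for u, v in edges):
                    coeffs[k] += 1
        return coeffs

    # C_5, the 5-cycle: I(C5, x) = 1 + 5x + 5x^2
    edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)}
    print(independence_polynomial(5, edges))   # [1, 5, 5, 0, 0, 0]
    ```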

  15. Well-covered graphs and factors

    DEFF Research Database (Denmark)

    Randerath, Bert; Vestergaard, Preben D.

    2006-01-01

    A maximum independent set of vertices in a graph is a set of pairwise nonadjacent vertices of largest cardinality α. Plummer defined a graph to be well-covered if every independent set is contained in a maximum independent set of G. Every well-covered graph G without isolated vertices has a perf...

  16. Stochastic behavior of a cold standby system with maximum repair time

    Directory of Open Access Journals (Sweden)

    Ashish Kumar

    2015-09-01

    Full Text Available The main aim of the present paper is to analyze the stochastic behavior of a cold standby system with the concepts of preventive maintenance, priority and maximum repair time. For this purpose, a stochastic model is developed in which initially one unit is operative and the other is kept as a cold standby. There is a single server who visits the system immediately as and when required. The server takes the unit under preventive maintenance after a maximum operation time at normal mode if one standby unit is available for operation. If the repair of the failed unit is not possible up to a maximum repair time, the failed unit is replaced by a new one. The failure time, maximum operation time and maximum repair time distributions of the unit are considered as exponentially distributed, while repair and maintenance time distributions are considered as arbitrary. All random variables are statistically independent and repairs are perfect. Various measures of system effectiveness are obtained by using the technique of semi-Markov processes and the regenerative point technique (RPT). To highlight the importance of the study, numerical results are also obtained for MTSF, availability and the profit function.

  17. Enhanced dynamic wedge and independent monitor unit verification

    International Nuclear Information System (INIS)

    Howlett, S.J.; University of Newcastle, NSW

    2004-01-01

    Full text: Some serious radiation accidents have occurred around the world during the delivery of radiotherapy treatment. The regrettable incident in Panama clearly indicated the need for independent monitor unit (MU) verification. Indeed the International Atomic Energy Agency (IAEA), after investigating the incident, made specific recommendations for radiotherapy centres which included an independent monitor unit check for all treatments. Independent monitor unit verification is practiced in many radiotherapy centres in developed countries around the world. It is mandatory in the USA but not yet in Australia. The enhanced dynamic wedge factor (EDWF) presents some significant problems in accurate MU calculation, particularly in the case of non-centre-of-field (non-COF) positions. This paper describes the development of an independent MU program, concentrating on the implementation of the EDW component. The difficult case of non-COF points under the EDW was studied in detail. A survey of Australasian centres regarding the use of independent MU check systems was conducted. The MUCalculator was developed with reference to MU calculations made by the Pinnacle 3D RTP system (Philips) for 4 MV, 6 MV and 18 MV X-ray beams from Varian machines used at the Newcastle Mater Misericordiae Hospital (NMMH) in the clinical environment. Ionisation chamber measurements in Solid Water™ and liquid water were performed based on a published test data set. Published algorithms combined with a depth-dependent profile correction were applied in an attempt to match measured data with maximum accuracy. The survey results are presented. Substantial data is presented in tabular form with extensive comparison with published data. Several different methods for calculating the EDWF are examined. A small systematic error was detected in the Gibbon equation used for the EDW calculations. Generally, calculations were within ±2% of measured values, although some setups exceeded this variation. Results indicate that COF

  18. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
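
    The parsimony cost that such searches minimize can be computed per alignment column with Fitch's bottom-up pass; the toy tree and column below are illustrative, and PTree's pattern-based bookkeeping is considerably more involved than this sketch.

        def fitch_cost(tree, leaf_states):
            """Bottom-up Fitch pass: minimum number of state changes on `tree`
            for one alignment column; `tree` maps internal node -> (left, right)."""
            cost = 0
            def states(node):
                nonlocal cost
                if node in leaf_states:
                    return {leaf_states[node]}
                left, right = tree[node]
                s1, s2 = states(left), states(right)
                if s1 & s2:
                    return s1 & s2
                cost += 1                 # disjoint child sets force one change
                return s1 | s2
            states("root")
            return cost

        # Toy topology ((A,B),(C,D)) and one nucleotide column
        tree = {"root": ("n1", "n2"), "n1": ("A", "B"), "n2": ("C", "D")}
        print(fitch_cost(tree, {"A": "G", "B": "G", "C": "T", "D": "G"}))  # -> 1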

  19. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  20. An Independent Filter for Gene Set Testing Based on Spectral Enrichment

    NARCIS (Netherlands)

    Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H

    2015-01-01

    Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in

  1. The Scottish Independence Referendum and After

    Directory of Open Access Journals (Sweden)

    Michael Keating

    2015-04-01

    Full Text Available The Scottish independence referendum on 18 September 2014 produced an apparently decisive result, with 45 per cent for independence and 55 per cent against. Yet, it has not settled the constitutional issue. There was a huge public engagement in the campaign, which has left a legacy for Scottish and UK politics. Scotland has been reinforced as a political community. The losing Yes side has emerged in better shape and more optimistic, while the winners have struggled to formulate the better autonomy package they had promised. Public opinion continues to favour maximum devolution short of independence. Scotland is a case of the kind of spatial rescaling that is taking place more generally across Europe, as new forms of statehood and of sovereignty evolve. Scottish public opinion favours more self-government but no longer recognizes the traditional nation-state model presented in the referendum question.

  2. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

    The present study was carried out in order to track the maximum power point in a variable speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip-speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the integral of the maximum-mechanical-torque tracking error, the error itself, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second-order integral operation of the original sliding mode control input signal. The result of the second-order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator output power. The simulation results, calculated using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).
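
    Underlying any such sliding-mode refinement is the basic MPPT law: hold the tip-speed ratio at its optimum, which makes the reference torque proportional to the square of rotor speed. A numeric sketch with illustrative turbine constants (not the paper's):

        import math

        # Illustrative turbine constants (assumed, not from the paper)
        RHO, R = 1.225, 2.0             # air density [kg/m^3], blade radius [m]
        CP_MAX, LAMBDA_OPT = 0.45, 7.0  # peak power coefficient, optimal tip-speed ratio
        A = math.pi * R**2              # swept area

        def mppt_references(wind_speed):
            """Optimal rotor speed and reference torque for a given wind speed."""
            omega_opt = LAMBDA_OPT * wind_speed / R         # keeps lambda at lambda_opt
            p_max = 0.5 * RHO * A * CP_MAX * wind_speed**3  # maximum capturable power
            torque_ref = p_max / omega_opt                  # equals k_opt * omega_opt**2
            return omega_opt, torque_ref

        for v in (6.0, 9.0, 12.0):
            w, t = mppt_references(v)
            print(f"v={v:4.1f} m/s  omega*={w:5.1f} rad/s  T*={t:7.1f} N*m")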

  3. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    Science.gov (United States)

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
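
    For reference, Kuiper's statistic mentioned above sums the maximum deviations of the empirical CDF above and below the hypothesized CDF; a minimal sketch (rejection thresholds, available from standard asymptotics, are omitted):

        import numpy as np

        def kuiper_statistic(draws, cdf):
            """Kuiper statistic V = D+ + D- between the empirical CDF of
            `draws` and the hypothesized `cdf` (a vectorized callable)."""
            x = np.sort(np.asarray(draws))
            n = len(x)
            f = cdf(x)
            d_plus = np.max(np.arange(1, n + 1) / n - f)   # ECDF above model CDF
            d_minus = np.max(f - np.arange(0, n) / n)      # model CDF above ECDF
            return d_plus + d_minus

        rng = np.random.default_rng(0)
        u = rng.uniform(size=1000)
        print(kuiper_statistic(u, lambda t: t))            # small: consistent with U(0,1)
        print(kuiper_statistic(u**2, lambda t: t))         # large: detectably non-uniform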

  4. Independent preferences

    DEFF Research Database (Denmark)

    Vind, Karl

    1991-01-01

    A simple mathematical result characterizing a subset of a product set is proved and used to obtain additive representations of preferences. The additivity consequences of independence assumptions are obtained for preferences which are not total or transitive. This means that most of the economic ...... theory based on additive preferences - expected utility, discounted utility - has been generalized to preferences which are not total or transitive. Other economic applications of the theorem are given...

  5. Maximum power point tracker for photovoltaic power plants

    Science.gov (United States)

    Arcidiacono, V.; Corsi, S.; Lambri, L.

    The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.
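
    One standard closed-loop criterion of this kind is perturb-and-observe hill climbing: step the operating voltage and keep the direction whenever power increases. The sketch below runs against a toy PV curve; the array model and numbers are illustrative assumptions, not the paper's measured characteristics.

        import math

        def pv_power(v):
            """Toy PV curve: current falls off exponentially near open circuit
            (Isc = 5 A, Voc ~ 21 V; both numbers invented for illustration)."""
            i = 5.0 * (1.0 - math.exp((v - 21.0) / 1.5))
            return max(i, 0.0) * v

        def perturb_and_observe(v=12.0, step=0.1, iterations=300):
            """Hill-climbing MPPT: keep the step direction while power rises,
            reverse it when power falls; settles oscillating around the peak."""
            p_prev, direction = pv_power(v), +1
            for _ in range(iterations):
                v += direction * step
                p = pv_power(v)
                if p < p_prev:
                    direction = -direction
                p_prev = p
            return v, p_prev

        v_mp, p_mp = perturb_and_observe()
        print(f"operating point ~{v_mp:.1f} V, ~{p_mp:.1f} W")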

  6. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  7. Environmental Influences on Independent Collaborative Play

    Science.gov (United States)

    Mawson, Brent

    2010-01-01

    Data from two qualitative research projects indicated a relationship between the type of early childhood setting and children's independent collaborative play. The first research project involved 22 three- and four-year-old children in a daylong setting and 47 four-year-old children in a sessional kindergarten. The second project involved…

  8. 49 CFR 229.73 - Wheel sets.

    Science.gov (United States)

    2010-10-01

    49 CFR 229.73 (Title 49 Transportation, Railroad Locomotive Safety Standards, Safety Requirements, Suspension System), Wheel sets: paragraph (a) concerns wheel sets when applied or turned; paragraph (b) limits the maximum variation in the diameter between any two wheel sets in a three...

  9. 49 CFR Appendix B to Part 386 - Penalty Schedule; Violations and Maximum Civil Penalties

    Science.gov (United States)

    2010-10-01

    Penalty Schedule; Violations and Maximum Civil Penalties (49 CFR Part 386, Appendix B). The Debt Collection Improvement Act of 1996 [Public Law 104-134, title III…] … civil penalties set out in paragraphs (e)(1) through (4) of this appendix results in death, serious…

  10. Maximum margin classifier working in a set of strings.

    Science.gov (United States)

    Koyano, Hitoshi; Hayashida, Morihiro; Akutsu, Tatsuya

    2016-03-01

    Numbers and numerical vectors account for a large portion of data. However, recently, the amount of string data generated has increased dramatically. Consequently, classifying string data is a common problem in many fields. The most widely used approach to this problem is to convert strings into numerical vectors using string kernels and subsequently apply a support vector machine that works in a numerical vector space. However, this non-one-to-one conversion involves a loss of information and makes it impossible to evaluate, using probability theory, the generalization error of a learning machine, considering that the given data to train and test the machine are strings generated according to probability laws. In this study, we approach this classification problem by constructing a classifier that works in a set of strings. To evaluate the generalization error of such a classifier theoretically, probability theory for strings is required. Therefore, we first extend a limit theorem for a consensus sequence of strings demonstrated by one of the authors and co-workers in a previous study. Using the obtained result, we then demonstrate that our learning machine classifies strings in an asymptotically optimal manner. Furthermore, we demonstrate the usefulness of our machine in practical data analysis by applying it to predicting protein-protein interactions using amino acid sequences and classifying RNAs by the secondary structure using nucleotide sequences.

  11. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  12. Ranking Specific Sets of Objects.

    Science.gov (United States)

    Maly, Jan; Woltran, Stefan

    2017-01-01

    Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking - jointly satisfying properties as dominance and independence - on the whole power set of objects. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or are not satisfying some background theory. In this paper, we treat the computational problem whether an order on a given subset of the power set of elements satisfying different variants of dominance and independence can be found, given a ranking on the elements. We show that this problem is tractable for partial rankings and NP-complete for total rankings.

  13. Simulation model of ANN based maximum power point tracking controller for solar PV system

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Anil K.; Singh, Bhupal [Department of Electrical and Electronics Engineering, Ajay Kumar Garg Engineering College, Ghaziabad 201009 (India); Kaushika, N.D.; Agarwal, Niti [School of Research and Development, Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi 110063 (India)

    2011-02-15

    In this paper the simulation model of an artificial neural network (ANN) based maximum power point tracking controller has been developed. The controller consists of an ANN tracker and the optimal control unit. The ANN tracker estimates the voltages and currents corresponding to a maximum power delivered by the solar PV (photovoltaic) array for variable cell temperature and solar radiation. The cell temperature is considered as a function of ambient air temperature, wind speed and solar radiation. The tracker is trained employing a set of 124 patterns using the back propagation algorithm. The mean square error between tracker output and target values is set to be of the order of 10^-5, and the learning process converges successfully after 1281 epochs. The accuracy of the ANN tracker has been validated by employing different test data sets. The control unit uses the estimates of the ANN tracker to adjust the duty cycle of the chopper to the optimum value needed for maximum power transfer to the specified load. (author)
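
    A minimal sketch of the same idea, assuming a scikit-learn MLP in place of the paper's back-propagation tracker and synthetic stand-ins for the 124 measured training patterns (the toy cell model below is invented for illustration):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic training patterns (stand-ins for the paper's 124 measured ones):
        # inputs are irradiance [W/m^2] and ambient temperature [deg C]; the target
        # maximum-power voltage follows a toy linear cell model, not a real array.
        rng = np.random.default_rng(0)
        G = rng.uniform(200, 1000, 124)
        T = rng.uniform(10, 40, 124)
        T_cell = T + 0.03 * G                       # crude cell-temperature proxy
        V_mp = 17.0 - 0.07 * (T_cell - 25) + 0.4 * np.log(G / 1000.0)

        tracker = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        tracker.fit(np.column_stack([G, T]), V_mp)

        # The control unit would turn this estimate into a chopper duty cycle.
        print(tracker.predict([[800.0, 25.0]]))     # estimated V_mp at 800 W/m^2, 25 C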

  14. Comparison of independent proxies in the reconstruction of deep ...

    African Journals Online (AJOL)

    Independent proxies were assessed in two Late Quaternary sediment cores from the eastern South Atlantic to compare deep-water changes during the last 400 kyr. ... is exclusively observed during interglacials, with maximum factor loadings in ... only slightly without a significant glacial-interglacial pattern, as measured in a ...

  15. Model independent foreground power spectrum estimation using WMAP 5-year data

    International Nuclear Information System (INIS)

    Ghosh, Tuhin; Souradeep, Tarun; Saha, Rajib; Jain, Pankaj

    2009-01-01

    In this paper, we propose and implement on WMAP 5 yr data a model independent approach of foreground power spectrum estimation for multifrequency observations of CMB experiments. Recently, a model independent approach to CMB power spectrum estimation was proposed by Saha et al. 2006. This methodology demonstrates that the CMB power spectrum can be reliably estimated solely from WMAP data without assuming any template models for the foreground components. In the current paper, we extend this work to estimate the galactic foreground power spectrum using the WMAP 5 yr maps in a self-contained analysis. We apply the model independent method in the harmonic basis to estimate the foreground power spectrum and the frequency dependence of the combined foregrounds. We also study the behavior of the synchrotron spectral index variation over different regions of the sky. We use the full sky Haslam map as an external template to increase the degrees of freedom, while computing the synchrotron spectral index over the frequency range from 408 MHz to 94 GHz. We compare our results with those obtained from maximum entropy method foreground maps, which are formed in pixel space. We find that, relative to our model independent estimates, maximum entropy method maps overestimate the foreground power close to the galactic plane and underestimate it at high latitudes.

  16. Gravitational Waves and the Maximum Spin Frequency of Neutron Stars

    NARCIS (Netherlands)

    Patruno, A.; Haskell, B.; D'Angelo, C.

    2012-01-01

    In this paper, we re-examine the idea that gravitational waves are required as a braking mechanism to explain the observed maximum spin frequency of neutron stars. We show that for millisecond X-ray pulsars, the existence of spin equilibrium as set by the disk/magnetosphere interaction is sufficient

  17. The scaling of maximum and basal metabolic rates of mammals and birds

    Science.gov (United States)

    Barbosa, Lauro A.; Garcia, Guilherme J. M.; da Silva, Jafferson K. L.

    2006-01-01

    Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here, we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts distinct power-law exponents in the body mass M for maximum metabolic rate, maximum heart rate, and muscular capillary density, in agreement with data.

  18. A Stochastic Maximum Principle for General Mean-Field Systems

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Li, Juan; Ma, Jin

    2016-01-01

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable, without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  19. A Stochastic Maximum Principle for General Mean-Field Systems

    Energy Technology Data Exchange (ETDEWEB)

    Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr [Université de Bretagne-Occidentale, Département de Mathématiques (France); Li, Juan, E-mail: juanli@sdu.edu.cn [Shandong University, Weihai, School of Mathematics and Statistics (China); Ma, Jin, E-mail: jinma@usc.edu [University of Southern California, Department of Mathematics (United States)

    2016-12-15

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable, without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  20. 38 CFR 18.434 - Education setting.

    Science.gov (United States)

    2010-07-01

    Adult Education § 18.434 Education setting. (a) Academic setting. A recipient shall educate, or shall provide for the education of, a handicapped person with persons who are not handicapped to the maximum extent appropriate to the needs of the handicapped person. A recipient shall place a handicapped person in the regular educational environment operated by the recipient unless...

  1. Participation of Children with Disabilities in Taiwan: The Gap between Independence and Frequency

    Science.gov (United States)

    Hwang, Ai-Wen; Yen, Chia-Feng; Liou, Tsan-Hon; Simeonsson, Rune J.; Chi, Wen-Chou; Lollar, Donald J.; Liao, Hua-Fang; Kang, Lin-Ju; Wu, Ting-Fang; Teng, Sue-Wen; Chiu, Wen-Ta

    2015-01-01

    Background Independence and frequency are two distinct dimensions of participation in daily life. The gap between independence and frequency may reflect the role of the environment on participation, but this distinction has not been fully explored. Methods A total of 18,119 parents or primary caregivers of children with disabilities aged 6.0-17.9 years were interviewed in a cross-sectional nationwide survey with the Functioning Scale of the Disability Evaluation System - Child version (FUNDES-Child). A section consisting of 20 items measured the children's daily participation in 4 environmental settings: home, neighborhood/community, school, and home/community. Higher independence and frequency restriction scores indicated greater limitation of participation in daily activities. Scores for independence, frequency and independence-frequency gaps were examined across ages along with trend analysis. ANOVA was used to compare the gaps across settings and diagnoses for children with mild levels of severity of impairment. Findings A negative independence-frequency gap (restriction of frequency was greater than that of independence) was found for children with mild to severe levels of impairment. A positive gap (restriction of independence was greater than that of frequency) was found for children with profound levels of severity. The gaps became wider with age in most settings for children with mild impairment and different diagnoses. The widest negative gaps were found in the neighborhood/community setting, compared with the other three settings, for children with mild to severe impairment. Conclusions Children's participation and independence-frequency gaps depend not only on the severity of their impairments or diagnoses, but also on their age, the setting and the support provided by their environment. In Taiwan, more frequency restrictions than ability restrictions were found for children with mild to moderate severity, especially in the neighborhood/community setting, and

  2. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  3. Multifractal analysis of managed and independent float exchange rates

    Science.gov (United States)

    Stošić, Darko; Stošić, Dusan; Stošić, Tatijana; Stanley, H. Eugene

    2015-06-01

    We investigate multifractal properties of daily price changes in currency rates using the multifractal detrended fluctuation analysis (MF-DFA). We analyze managed and independent floating currency rates in eight countries, and determine the changes in multifractal spectrum when transitioning between the two regimes. We find that after the transition from managed to independent float regime the changes in multifractal spectrum (position of maximum and width) indicate an increase in market efficiency. The observed changes are more pronounced for developed countries that have a well established trading market. After shuffling the series, we find that the multifractality is due to both probability density function and long term correlations for managed float regime, while for independent float regime multifractality is in most cases caused by broad probability density function.
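
    A compact MF-DFA sketch for reference: compute the profile, detrend it segment-wise, and form the q-th order fluctuation function F_q(s); the generalized Hurst exponent h(q) is the log-log slope, and a q-dependent h(q) signals multifractality. The scales and q values below are illustrative choices.

        import numpy as np

        def mfdfa(x, scales, q_values):
            """Minimal MF-DFA: returns F_q(s) for each q and segment size s."""
            y = np.cumsum(x - np.mean(x))            # profile of the series
            F = np.zeros((len(q_values), len(scales)))
            for j, s in enumerate(scales):
                rms = []
                for v in range(len(y) // s):
                    seg = y[v*s:(v+1)*s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
                    rms.append(np.mean((seg - trend)**2))
                rms = np.array(rms)
                for i, q in enumerate(q_values):
                    if q == 0:
                        F[i, j] = np.exp(0.5 * np.mean(np.log(rms)))
                    else:
                        F[i, j] = np.mean(rms ** (q / 2.0)) ** (1.0 / q)
            return F

        x = np.random.default_rng(0).normal(size=4096)
        scales, qs = [16, 32, 64, 128, 256], [-3, -1, 2, 3]
        F = mfdfa(x, scales, qs)
        h = [np.polyfit(np.log(scales), np.log(F[i]), 1)[0] for i in range(len(qs))]
        print(h)   # all slopes near 0.5 for white noise (monofractal)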

  4. Every plane graph of maximum degree 8 has an edge-face 9-colouring.

    NARCIS (Netherlands)

    R.J. Kang (Ross); J.-S. Sereni; M. Stehlík

    2011-01-01

    An edge-face coloring of a plane graph with edge set $E$ and face set $F$ is a coloring of the elements of $E \cup F$ such that adjacent or incident elements receive different colors. Borodin proved that every plane graph of maximum degree $\Delta \ge 10$ can be edge-face colored with

  5. The Local Maximum Clustering Method and Its Application in Microarray Gene Expression Data Analysis

    Directory of Open Access Journals (Sweden)

    Chen Yidong

    2004-01-01

    Full Text Available An unsupervised data clustering method, called the local maximum clustering (LMC) method, is proposed for identifying clusters in experiment data sets based on research interest. A magnitude property is defined according to research purposes, and data sets are clustered around each local maximum of the magnitude property. By properly defining a magnitude property, this method can overcome many difficulties in microarray data clustering, such as reduced projection in similarities, noise, and arbitrary gene distribution. To critically evaluate the performance of this clustering method in comparison with other methods, we designed three model data sets with known cluster distributions and applied the LMC method as well as the hierarchical clustering method, the k-means clustering method, and the self-organizing map method to these model data sets. The results show that the LMC method produces the most accurate clustering results. As an example of application, we applied the method to cluster the leukemia samples reported in the microarray study of Golub et al. (1999).
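
    The clustering rule can be sketched as uphill assignment: each point repeatedly steps to the highest-magnitude point among its k nearest neighbours until it reaches a local maximum, and points sharing a maximum form one cluster. The neighbourhood size and toy data below are illustrative assumptions, not the paper's exact procedure.

        import numpy as np

        def local_maximum_clusters(points, magnitude, k=5):
            """Assign each point to the local maximum reached by repeatedly
            stepping to the highest-magnitude point among its k nearest neighbours."""
            pts = np.asarray(points)
            n = len(pts)
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
            nbrs = np.argsort(d, axis=1)[:, :k + 1]        # includes the point itself
            step = np.array([nb[np.argmax(magnitude[nb])] for nb in nbrs])
            labels = np.arange(n)
            for i in range(n):                             # follow uphill pointers
                j = i
                while step[j] != j:
                    j = step[j]
                labels[i] = j                              # j is a local maximum
            return labels

        rng = np.random.default_rng(0)
        pts = np.vstack([rng.normal(0, .3, (30, 2)), rng.normal(3, .3, (30, 2))])
        mag = np.exp(-np.min([np.sum((pts - c)**2, 1) for c in ([0, 0], [3, 3])], 0))
        print(np.unique(local_maximum_clusters(pts, mag)))  # typically two cluster roots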

  6. Bounds and maximum principles for the solution of the linear transport equation

    International Nuclear Information System (INIS)

    Larsen, E.W.

    1981-01-01

    Pointwise bounds are derived for the solution of time-independent linear transport problems with surface sources in convex spatial domains. Under specified conditions, upper bounds are derived which, as a function of position, decrease with distance from the boundary. Also, sufficient conditions are obtained for the existence of maximum and minimum principles, and a counterexample is given which shows that such principles do not always exist

  7. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    Energy Technology Data Exchange (ETDEWEB)

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
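
    At its core, maximum-likelihood map-making solves the normal equations (P^T N^-1 P) m = P^T N^-1 d for the map m, given the pointing matrix P, noise covariance N and time-ordered data d. A toy sketch with white noise (so N^-1 cancels); MADmap's preconditioning, FFT-based correlated-noise handling and massive parallelism are what make this scale:

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import cg

        rng = np.random.default_rng(0)
        n_samples, n_pix = 10_000, 50
        truth = rng.normal(size=n_pix)                      # toy sky map

        hits = rng.integers(0, n_pix, n_samples)            # pointing: sample -> pixel
        P = csr_matrix((np.ones(n_samples), (np.arange(n_samples), hits)),
                       shape=(n_samples, n_pix))
        tod = P @ truth + 0.5 * rng.normal(size=n_samples)  # time-ordered data, white noise

        # White noise => N^-1 is a scalar and cancels; solve (P^T P) m = P^T d by CG.
        A = (P.T @ P).tocsc()
        b = P.T @ tod
        m, info = cg(A, b)
        print(info, np.max(np.abs(m - truth)))              # info == 0; m close to truth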

  8. Independence, institutionalization, death and treatment costs 18 months after rehabilitation of older people in two different primary health care settings.

    Science.gov (United States)

    Johansen, Inger; Lindbak, Morten; Stanghelle, Johan K; Brekke, Mette

    2012-11-14

    The optimal setting and content of primary health care rehabilitation of older people is not known. Our aim was to study independence, institutionalization, death and treatment costs 18 months after primary care rehabilitation of older people in two different settings. Eighteen months follow-up of an open, prospective study comparing the outcome of multi-disciplinary rehabilitation of older people, in a structured and intensive Primary care dedicated inpatient rehabilitation (PCDIR, n=202) versus a less structured and less intensive Primary care nursing home rehabilitation (PCNHR, n=100). 302 patients, disabled from stroke, hip-fracture, osteoarthritis and other chronic diseases, aged ≥65years, assessed to have a rehabilitation potential and being referred from general hospital or own residence. Primary: Independence, assessed by Sunnaas ADL Index(SI). Secondary: Hospital and short-term nursing home length of stay (LOS); institutionalization, measured by institutional residence rate; death; and costs of rehabilitation and care. Statistical tests: T-tests, Correlation tests, Pearson's χ2, ANCOVA, Regression and Kaplan-Meier analyses. Overall SI scores were 26.1 (SD 7.2) compared to 27.0 (SD 5.7) at the end of rehabilitation, a statistically, but not clinically significant reduction (p=0.003 95%CI(0.3-1.5)). The PCDIR patients scored 2.2points higher in SI than the PCNHR patients, adjusted for age, gender, baseline MMSE and SI scores (p=0.003, 95%CI(0.8-3.7)). Out of 49 patients staying >28 days in short-term nursing homes, PCNHR-patients stayed significantly longer than PCDIR-patients (mean difference 104.9 days, 95%CI(0.28-209.6), p=0.05). The institutionalization increased in PCNHR (from 12%-28%, p=0.001), but not in PCDIR (from 16.9%-19.3%, p= 0.45). The overall one year mortality rate was 9.6%. Average costs were substantially higher for PCNHR versus PCDIR. The difference per patient was 3528€ for rehabilitation (prehabilitation and care were 18702€ (=1

  9. Independence, institutionalization, death and treatment costs 18 months after rehabilitation of older people in two different primary health care settings

    Directory of Open Access Journals (Sweden)

    Johansen Inger

    2012-11-01

    Full Text Available Abstract Background The optimal setting and content of primary health care rehabilitation of older people is not known. Our aim was to study independence, institutionalization, death and treatment costs 18 months after primary care rehabilitation of older people in two different settings. Methods Eighteen months follow-up of an open, prospective study comparing the outcome of multi-disciplinary rehabilitation of older people, in a structured and intensive Primary care dedicated inpatient rehabilitation (PCDIR, n=202 versus a less structured and less intensive Primary care nursing home rehabilitation (PCNHR, n=100. Participants: 302 patients, disabled from stroke, hip-fracture, osteoarthritis and other chronic diseases, aged ≥65years, assessed to have a rehabilitation potential and being referred from general hospital or own residence. Outcome measures: Primary: Independence, assessed by Sunnaas ADL Index(SI. Secondary: Hospital and short-term nursing home length of stay (LOS; institutionalization, measured by institutional residence rate; death; and costs of rehabilitation and care. Statistical tests: T-tests, Correlation tests, Pearson’s χ2, ANCOVA, Regression and Kaplan-Meier analyses. Results Overall SI scores were 26.1 (SD 7.2 compared to 27.0 (SD 5.7 at the end of rehabilitation, a statistically, but not clinically significant reduction (p=0.003 95%CI(0.3-1.5. The PCDIR patients scored 2.2points higher in SI than the PCNHR patients, adjusted for age, gender, baseline MMSE and SI scores (p=0.003, 95%CI(0.8-3.7. Out of 49 patients staying >28 days in short-term nursing homes, PCNHR-patients stayed significantly longer than PCDIR-patients (mean difference 104.9 days, 95%CI(0.28-209.6, p=0.05. The institutionalization increased in PCNHR (from 12%-28%, p=0.001, but not in PCDIR (from 16.9%-19.3%, p= 0.45. The overall one year mortality rate was 9.6%. Average costs were substantially higher for PCNHR versus PCDIR. The difference per patient

  10. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

    2010-12-15

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.

  11. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    International Nuclear Information System (INIS)

    Wollaber, Allan B.; Larsen, Edward W.; Densmore, Jeffery D.

    2011-01-01

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms. (author)

  12. Multivariate modeling of complications with data driven variable selection: Guarding against overfitting and effects of data set size

    International Nuclear Information System (INIS)

    Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van’t; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    Purpose: Multivariate modeling of complications after radiotherapy is frequently used in conjunction with data driven variable selection. This study quantifies the risk of overfitting in a data driven modeling method using bootstrapping for data with typical clinical characteristics, and estimates the minimum amount of data needed to obtain models with relatively high predictive power. Materials and methods: To facilitate repeated modeling and cross-validation with independent datasets for the assessment of true predictive power, a method was developed to generate simulated data with statistical properties similar to real clinical data sets. Characteristics of three clinical data sets from radiotherapy treatment of head and neck cancer patients were used to simulate data with set sizes between 50 and 1000 patients. A logistic regression method using bootstrapping and forward variable selection was used for complication modeling, resulting for each simulated data set in a selected number of variables and an estimated predictive power. The true optimal number of variables and true predictive power were calculated using cross-validation with very large independent data sets. Results: For all simulated data set sizes the number of variables selected by the bootstrapping method was on average close to the true optimal number of variables, but showed considerable spread. Bootstrapping is more accurate in selecting the optimal number of variables than the AIC and BIC alternatives, but this did not translate into a significant difference in the true predictive power. The true predictive power asymptotically converged toward a maximum predictive power for large data sets, and the estimated predictive power converged toward the true predictive power. More than half of the potential predictive power is gained after approximately 200 samples. Our simulations demonstrated severe overfitting (a predictive power lower than that of predicting 50% probability) in a number of small
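
    The bootstrap selection idea can be sketched as follows: rerun a greedy forward selection on resampled data and record how often each candidate variable is chosen. This stand-in uses a fixed two-variable forward selection with logistic regression on synthetic data — a simplification of the paper's procedure, with all sizes illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import log_loss

        def forward_select(X, y, n_keep=2):
            """Greedy forward selection: add the variable that most reduces log-loss."""
            chosen = []
            for _ in range(n_keep):
                best, best_loss = None, np.inf
                for j in range(X.shape[1]):
                    if j in chosen:
                        continue
                    cols = chosen + [j]
                    model = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
                    loss = log_loss(y, model.predict_proba(X[:, cols]))
                    if loss < best_loss:
                        best, best_loss = j, loss
                chosen.append(best)
            return chosen

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 8))                  # 8 candidate predictors
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

        counts = np.zeros(8)
        for _ in range(100):                           # bootstrap replicates
            idx = rng.integers(0, 200, 200)
            counts[forward_select(X[idx], y[idx])] += 1
        print(counts / 100)   # selection frequencies; variables 0 and 1 dominate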

  13. Hong–Ou–Mandel interference with two independent weak coherent states

    International Nuclear Information System (INIS)

    Chen Hua; An Xue-Bi; Wu Juan; Yin Zhen-Qiang; Wang Shuang; Chen Wei; Han Zhen-Fu

    2016-01-01

    Recently, the Hong–Ou–Mandel (HOM) interference between two independent weak coherent pulses (WCPs) has received much attention due to measurement-device-independent (MDI) quantum key distribution (QKD). Earlier articles, using classical wave theory, show that the visibility of this kind of HOM-type interference is ≤ 50%. In this work, we analyze this kind of interference using quantum optics, which reveals more details compared to the wave theory. The analysis confirms the maximum visibility of 50%, and we conclude that the maximum visibility of 50% comes from the two single-photon states in the WCPs, without considering noise. In the experiment, we successfully approach the visibility of 50% by using WCPs split from a single pico-second laser source and phase scanning. Since this kind of HOM interference is immune to slow phase fluctuations, both the realized and proposed experimental designs can provide stable approaches to high-resolution optical distance detection. (paper)

  14. Rumor Identification with Maximum Entropy in MicroNet

    Directory of Open Access Journals (Sweden)

    Suisheng Yu

    2017-01-01

    Full Text Available The widely used applications of Microblog, WeChat, and other social networking platforms (that we call MicroNet shorten the period of information dissemination and expand the range of information dissemination, which allows rumors to cause greater harm and have more influence. A hot topic in the information dissemination field is how to identify and block rumors. Based on the maximum entropy model, this paper constructs the recognition mechanism of rumor information in the micronetwork environment. First, based on the information entropy theory, we obtained the characteristics of rumor information using the maximum entropy model. Next, we optimized the original classifier training set and the feature function to divide the information into rumors and nonrumors. Finally, the experimental simulation results show that the rumor identification results using this method are better than the original classifier and other related classification methods.
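
    A maximum-entropy classifier with bag-of-words feature functions is mathematically the same exponential-family model as multinomial logistic regression, which gives a compact sketch; the toy corpus below is invented for illustration and is unrelated to the paper's data.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy corpus (invented): 1 = rumor, 0 = non-rumor
        texts = ["shocking secret cure they hide", "official report released today",
                 "forward this or bad luck", "city council meeting rescheduled",
                 "miracle gadget banned by doctors", "train schedule updated"]
        labels = [1, 0, 1, 0, 1, 0]

        # LogisticRegression fits the same exponential-family model that
        # maximum-entropy training derives from feature-expectation constraints.
        maxent = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
        maxent.fit(texts, labels)
        print(maxent.predict(["secret miracle they hide"]))   # expected: [1]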

  15. Optimal control problems with delay, the maximum principle and necessary conditions

    NARCIS (Netherlands)

    Frankena, J.F.

    1975-01-01

    In this paper we consider a rather general optimal control problem involving ordinary differential equations with delayed arguments and a set of equality and inequality restrictions on state- and control variables. For this problem a maximum principle is given in pointwise form, using variational

  16. Stable Chimeras and Independently Synchronizable Clusters

    Science.gov (United States)

    Cho, Young Sul; Nishikawa, Takashi; Motter, Adilson E.

    2017-08-01

    Cluster synchronization is a phenomenon in which a network self-organizes into a pattern of synchronized sets. It has been shown that diverse patterns of stable cluster synchronization can be captured by symmetries of the network. Here, we establish a theoretical basis to divide an arbitrary pattern of symmetry clusters into independently synchronizable cluster sets, in which the synchronization stability of the individual clusters in each set is decoupled from that in all the other sets. Using this framework, we suggest a new approach to find permanently stable chimera states by capturing two or more symmetry clusters—at least one stable and one unstable—that compose the entire fully symmetric network.

  17. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  18. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of the simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  19. J/ψ+χcJ production at the B factories under the principle of maximum conformality

    International Nuclear Information System (INIS)

    Wang, Sheng-Quan; Wu, Xing-Gang; Zheng, Xu-Chang; Shen, Jian-Ming; Zhang, Qiong-Lian

    2013-01-01

    Under conventional scale setting, the renormalization scale uncertainty usually constitutes a systematic error for a fixed-order perturbative QCD estimation. The recently suggested principle of maximum conformality (PMC) provides a principle to eliminate such scale ambiguity in a step-by-step way. Using the PMC, all non-conformal terms in the perturbative expansion series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. In this paper, we make a detailed PMC analysis of both the polarized and the unpolarized cross sections for the double charmonium production process e+ + e- → J/ψ(ψ') + χ_cJ with J = 0, 1, 2. The running behavior of the coupling constant, governed by the PMC scales, is determined exactly for the specific processes. We compare our predictions with the measurements at the B factories, BaBar and Belle, and with the theoretical estimations in the literature. Because the non-conformal terms differ among the various polarized and unpolarized cross sections, the PMC scales of these cross sections are in principle different. It is found that all the PMC scales are almost independent of the initial choice of renormalization scale. Thus, the large renormalization scale uncertainty obtained under conventional scale setting, up to ~40% at the NLO level for both the polarized and the unpolarized cross sections, is greatly suppressed. It is found that the charmonium production is dominated by the J=0 channel. After PMC scale setting, we obtain σ(J/ψ + χ_c0) = 12.25 (+3.70/-3.13) fb and σ(ψ' + χ_c0) = 5.23 (+1.56/-1.32) fb, where the squared average errors are caused by the bound-state parameters m_c, |R_J/ψ(0)| and |R'_χcJ(0)|, which are non-perturbative error sources distinct from the QCD scale-setting problem. In comparison to the experimental data, a more accurate theoretical estimation shall be helpful for a precise

  20. The maximum economic depth of groundwater abstraction for irrigation

    Science.gov (United States)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of
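
    The economic criterion can be sketched directly: the maximum economic depth is the largest depth at which annual irrigation revenue still covers the energy cost of lifting the water plus amortized drilling costs. Every parameter value below is an illustrative placeholder, not a number from the study.

        # Find the deepest water table at which pumping is still economic.
        RHO_G = 9810.0            # rho * g for water [N/m^3]

        def annual_cost(depth_m, volume_m3=2e5, energy_price=0.10,
                        pump_eff=0.6, drill_cost_per_m=100.0, lifetime_yr=20):
            """Energy cost of lifting `volume_m3` from `depth_m`, plus
            amortized drilling cost (all parameter values illustrative)."""
            energy_J = RHO_G * volume_m3 * depth_m / pump_eff
            energy_kWh = energy_J / 3.6e6
            return energy_kWh * energy_price + drill_cost_per_m * depth_m / lifetime_yr

        def max_economic_depth(annual_revenue, step=1.0):
            depth = 0.0
            while annual_cost(depth + step) <= annual_revenue:  # cost rises with depth
                depth += step
            return depth

        print(max_economic_depth(annual_revenue=30_000.0), "m")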

  1. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history.

    Science.gov (United States)

    Cherry, Joshua L

    2017-02-23

    Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data. The algorithm is applied to bacterial data sets containing up to nearly 2000 genomes with several thousand variable nucleotide sites. Run times are several seconds or less. Computational experiments show that maximum compatibility is less sensitive than maximum parsimony to the inclusion of nucleotide data that, though derived from actual sequence reads, has been identified as likely to be misleading. Maximum compatibility is a useful tool for certain phylogenetic problems, such as inferring the relationships among closely-related bacteria from whole-genome sequence data. The algorithm presented here rapidly solves fairly large problems of this type, and provides robustness against misleading characters that can pollute large-scale sequencing data.
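
    For binary characters, pairwise compatibility reduces to the four-gamete test: two characters are compatible exactly when they do not jointly exhibit all four state combinations. Building this pairwise graph is the step on which the maximum-clique machinery operates; the ambiguity handling below (simply skipping '?' sites) is a rough stand-in for the algorithm's more careful treatment.

        from itertools import combinations

        def compatible(col_a, col_b):
            """Four-gamete test for two binary characters; '?' (ambiguity)
            is skipped rather than allowed to create a forbidden combination."""
            seen = {(a, b) for a, b in zip(col_a, col_b) if '?' not in (a, b)}
            return len(seen) < 4

        # Columns of an alignment (toy data), one character per string
        chars = ["00110", "01010", "0011?", "01100"]
        pairs = [(i, j) for i, j in combinations(range(len(chars)), 2)
                 if compatible(chars[i], chars[j])]
        print(pairs)   # edges of the compatibility graph, here [(0, 2)]; a maximum
                       # clique in this graph is a maximum-compatible character set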

  2. A maximum-principle preserving finite element method for scalar conservation equations

    KAUST Repository

    Guermond, Jean-Luc; Nazarov, Murtazo

    2014-01-01

    This paper introduces a first-order viscosity method for the explicit approximation of scalar conservation equations with Lipschitz fluxes using continuous finite elements on arbitrary grids in any space dimension. Provided the lumped mass matrix is positive definite, the method is shown to satisfy the local maximum principle under a usual CFL condition. The method is independent of the cell type; for instance, the mesh can be a combination of tetrahedra, hexahedra, and prisms in three space dimensions. © 2014 Elsevier B.V.

  3. A maximum-principle preserving finite element method for scalar conservation equations

    KAUST Repository

    Guermond, Jean-Luc

    2014-04-01

    This paper introduces a first-order viscosity method for the explicit approximation of scalar conservation equations with Lipschitz fluxes using continuous finite elements on arbitrary grids in any space dimension. Provided the lumped mass matrix is positive definite, the method is shown to satisfy the local maximum principle under a usual CFL condition. The method is independent of the cell type; for instance, the mesh can be a combination of tetrahedra, hexahedra, and prisms in three space dimensions. © 2014 Elsevier B.V.

  4. A 3'-coterminal nested set of independently transcribed mRNAs is generated during Berne virus replication

    International Nuclear Information System (INIS)

    Snijder, E.J.; Horzinek, M.C.; Spaan, W.J.

    1990-01-01

    By using poly(A)-selected RNA from Berne virus (BEV)-infected embryonic mule skin cells as a template, cDNA was prepared and cloned in plasmid pUC9. Recombinants covering a contiguous sequence of about 10 kilobases were identified. Northern (RNA) blot hybridizations with various restriction fragments from these clones showed that the five BEV mRNAs formed a 3'-coterminal nested set. Sequence analysis revealed the presence of four complete open reading frames of 4743, 699, 426, and 480 nucleotides, with initiation codons coinciding with the 5' ends of BEV RNAs 2 through 5, respectively. By using primer extension analysis and oligonucleotide hybridizations, RNA 5 was found to be contiguous on the consensus sequence. The transcription of BEV mRNAs was studied by means of UV mapping. BEV RNAs 1, 2, and 3 were shown to be transcribed independently, which is also likely--although not rigorously proven--for RNAs 4 and 5. Upstream of the AUG codon of each open reading frame a conserved sequence pattern was observed which is postulated to function as a core promoter sequence in subgenomic RNA transcription. In the area surrounding the core promoter region of the two most abundant subgenomic BEV RNAs, a number of homologous sequence motifs were identified

  5. The Application of an Army Prospective Payment Model Structured on the Standards Set Forth by the CHAMPUS Maximum Allowable Charges and the Center for Medicare and Medicaid Services: An Academic Approach

    Science.gov (United States)

    2005-04-29

    Final report, 29-04-2005, covering July 2004 to July 2005. The application of an Army prospective payment model structured on the standards set forth by the CHAMPUS maximum allowable charges and the Center for Medicare and Medicaid Services: an academic approach (Health Care Administration).

  6. Twenty-five years of maximum-entropy principle

    Science.gov (United States)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.

  7. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is a part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  8. Independent monitor unit calculation for intensity modulated radiotherapy using the MIMiC multileaf collimator

    International Nuclear Information System (INIS)

    Chen Zhe; Xing Lei; Nath, Ravinder

    2002-01-01

    A self-consistent monitor unit (MU) and isocenter point-dose calculation method has been developed that provides an independent verification of the MU for intensity modulated radiotherapy (IMRT) using the MIMiC (Nomos Corporation) multileaf collimator. The method takes into account two unique features of IMRT using the MIMiC: namely the gantry-dynamic arc delivery of intensity modulated photon beams and the slice-by-slice dose delivery for large tumor volumes. The method converts the nonuniform beam intensity planned at discrete gantry angles of 5 deg. or 10 deg. into conventional nonmodulated beam intensity apertures of elemental arc segments of 1 deg. This approach more closely simulates the actual gantry-dynamic arc delivery by MIMiC. Because each elemental arc segment is of uniform intensity, the MU calculation for an IMRT arc is made equivalent to a conventional arc with gantry-angle dependent beam apertures. The dose to the isocenter from each 1 deg. elemental arc segment is calculated by using the Clarkson scatter summation technique based on measured tissue-maximum-ratio and output factors, independent of the dose calculation model used in the IMRT planning system. For treatments requiring multiple treatment slices, the MU for the arc at each treatment slice takes into account the MU, leakage and scatter doses from other slices. This is achieved by solving a set of coupled linear equations for the MUs of all involved treatment slices. All input dosimetry data for the independent MU/isocenter point-dose calculation are measured directly. Comparison of the MU and isocenter point dose calculated by the independent program to those calculated by the Corvus planning system and to direct measurements has shown good agreement with relative difference less than ±3%. The program can be used as an independent initial MU verification for IMRT plans using the MIMiC multileaf collimators
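    The coupled-equation step described above lends itself to a compact illustration. The following sketch is illustrative only; the slice count, matrix entries and doses are invented, not the authors' data. It shows how the MUs of all treatment slices can be obtained in one linear solve once the per-MU dose contributions between slices are known:

      import numpy as np

      # a[i][j]: dose (cGy) delivered to slice i per monitor unit of the arc
      # treating slice j; diagonal terms are primary dose, off-diagonal terms
      # model leakage and scatter from neighbouring treatment slices.
      a = np.array([[1.00, 0.03, 0.01],
                    [0.03, 1.00, 0.03],
                    [0.01, 0.03, 1.00]])
      d = np.array([200.0, 200.0, 180.0])   # prescribed isocenter doses (cGy)

      mu = np.linalg.solve(a, d)            # coupled MUs for all slices at once
      print(np.round(mu, 1))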

  9. The η{sub c} decays into light hadrons using the principle of maximum conformality

    Energy Technology Data Exchange (ETDEWEB)

    Du, Bo-Lun; Wu, Xing-Gang; Zeng, Jun; Bu, Shi; Shen, Jian-Ming [Chongqing University, Department of Physics, Chongqing (China)

    2018-01-15

    In the paper, we analyze the η{sub c} decays into light hadrons at the next-to-leading order QCD corrections by applying the principle of maximum conformality (PMC). The relativistic correction at the O(α{sub s}v{sup 2})-order level has been included in the discussion, which gives about 10% contribution to the ratio R. The PMC, which satisfies renormalization group invariance, is designed to obtain a scale-fixed and scheme-independent prediction at any fixed order. To avoid confusion in treating the n{sub f}-terms, we transform the usual MS pQCD series into the one under the minimal momentum space subtraction scheme. To compare with the prediction under conventional scale setting, R{sub Conv,mMOM-r} = (4.12{sup +0.30}{sub -0.28}) x 10{sup 3}, after applying the PMC we obtain R{sub PMC,mMOM-r} = (6.09{sup +0.62}{sub -0.55}) x 10{sup 3}, where the errors are squared averages of the ones caused by m{sub c} and Λ{sub mMOM}. The PMC prediction agrees with the recent PDG value within errors, i.e. R{sup exp} = (6.3 ± 0.5) x 10{sup 3}. Thus we think the mismatch between the prediction under conventional scale setting and the data is due to an improper choice of scale, which can, however, be resolved by using the PMC. (orig.)

  10. Linear intra-bone geometry dependencies of the radius: Radius length determination by maximum distal width

    International Nuclear Information System (INIS)

    Baumbach, S.F.; Krusche-Mandl, I.; Huf, W.; Mall, G.; Fialka, C.

    2012-01-01

    Purpose: The aim of the study was to investigate possible linear intra-bone geometry dependencies by determining the relation between the maximum radius length and maximum distal width in two independent populations, and to test for possible gender or age effects. A strong correlation can help develop more representative fracture models and osteosynthetic devices as well as aid gender and height estimation in anthropologic/forensic cases. Methods: First, maximum radius length and distal width of 100 consecutive patients, aged 20–70 years, were digitally measured on standard lower arm radiographs by two independent investigators. Second, the same measurements were performed ex vivo on a second cohort of 135 isolated, formalin-fixed radii. Standard descriptive statistics as well as correlations were calculated, and possible gender and age influences were tested for both populations separately. Results: The radiographic dataset resulted in a correlation of radius length and width of r = 0.753 (adj. R² = 0.563, p < 0.001), with gender influencing the correlation (adj. R² = 0.592) and side having no influence. The radius length–width correlation for the isolated radii was r = 0.621 (adj. R² = 0.381, p < 0.001; adj. R² = 0.598 with gender included). Conclusion: A relatively strong radius length–distal width correlation was found in two different populations, indicating that linear body proportions might not only apply to body height and axial length measurements of long bones but also to proportional dependency of bone shapes in general.

  11. On the quirks of maximum parsimony and likelihood on phylogenetic networks.

    Science.gov (United States)

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2017-03-21

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are becoming of more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  13. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  14. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and by Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  15. Near-maximum-power-point-operation (nMPPO) design of photovoltaic power generation system

    Energy Technology Data Exchange (ETDEWEB)

    Huang, B.J.; Sun, F.S.; Ho, R.W. [Department of Mechanical Engineering, National Taiwan University, Taipei 106, Taiwan (China)

    2006-08-15

    The present study proposes a PV system design, called 'near-maximum-power-point-operation' (nMPPO), that can keep performance very close to that of a PV system with MPPT (maximum-power-point tracking) while eliminating the MPPT hardware. The concept of nMPPO is to match the design of the battery bank voltage V{sub set} with the MPP (maximum-power point) of the PV module, based on an analysis using meteorological data. Three design methods are used in the present study to determine the optimal V{sub set}. The analytical results show that nMPPO is feasible and that the optimal V{sub set} falls in the range 13.2-15.0V for the MSX60 PV module. The long-term performance simulation shows that the overall nMPPO efficiency {eta}{sub nMPPO} is higher than 94%. Two outdoor field tests were carried out in the present study to verify the design of nMPPO. The test results for a single PV module (60Wp) indicate that the nMPPO efficiency {eta}{sub nMPPO} is mostly higher than 93% at various PV temperatures T{sub pv}. Another long-term field test of a 1kWp PV array using nMPPO shows that the power generation using nMPPO is almost identical to MPPT under various weather conditions and T{sub pv} variation from 24{sup o}C to 70{sup o}C. (author)
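    The trade-off at the heart of nMPPO can be sketched with an idealized single-diode PV model. In the snippet below all parameter values (light current, saturation current, thermal voltage, set voltage) are illustrative assumptions, not the MSX60 data used in the paper:

      import numpy as np

      I_L, I_0, V_T = 3.8, 1e-9, 0.85     # illustrative module parameters

      def current(v):                      # idealized single-diode I-V curve
          return I_L - I_0 * (np.exp(v / V_T) - 1.0)

      v = np.linspace(0.0, 20.0, 2001)
      p = v * current(v)
      v_mpp, p_mpp = v[np.argmax(p)], p.max()

      v_set = 14.0                         # fixed battery-bank voltage (nMPPO)
      eta = v_set * current(v_set) / p_mpp # fraction of MPP power recovered
      print(f"V_mpp = {v_mpp:.1f} V, nMPPO efficiency = {eta:.1%}")

    Sweeping v_set over candidate battery-bank voltages and weighting by meteorological data is, in spirit, how an optimal fixed operating voltage can be selected.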

  16. Ontology-based geographic data set integration

    NARCIS (Netherlands)

    Uitermark, H.T.J.A.; Uitermark, Harry T.; Oosterom, Peter J.M.; Mars, Nicolaas; Molenaar, Martien; Molenaar, M.

    1999-01-01

    In order to develop a system to propagate updates we investigate the semantic and spatial relationships between independently produced geographic data sets of the same region (data set integration). The goal of this system is to reduce operator intervention in update operations between corresponding

  17. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy - also known as Maximum Caliber principle -, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy differences between two equilibrium thermodynamic states with the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality will be performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social systems, financial and ecological systems.
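    For reference, the identity discussed above is conventionally written as follows (standard notation: W is the work performed along a single realization of the process, ΔF the equilibrium free energy difference, and the average is taken over the path ensemble):

      % Jarzynski equality, with beta = 1/(k_B T)
      \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}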

  18. Set theory and logic

    CERN Document Server

    Stoll, Robert R

    1979-01-01

    Set Theory and Logic is the result of a course of lectures for advanced undergraduates, developed at Oberlin College for the purpose of introducing students to the conceptual foundations of mathematics. Mathematics, specifically the real number system, is approached as a unity whose operations can be logically ordered through axioms. One of the most complex and essential of modern mathematical innovations, the theory of sets (crucial to quantum mechanics and other sciences), is introduced in a most careful manner, aiming for the maximum in clarity and stimulation for further study in

  19. Multi-Temporal Independent Component Analysis and Landsat 8 for Delineating Maximum Extent of the 2013 Colorado Front Range Flood

    Directory of Open Access Journals (Sweden)

    Stephen M. Chignell

    2015-07-01

    Full Text Available Maximum flood extent—a key data need for disaster response and mitigation—is rarely quantified due to storm-related cloud cover and the low temporal resolution of optical sensors. While change detection approaches can circumvent these issues through the identification of inundated land and soil from post-flood imagery, their accuracy can suffer in the narrow and complex channels of increasingly developed and heterogeneous floodplains. This study explored the utility of the Operational Land Imager (OLI) and Independent Component Analysis (ICA) for addressing these challenges in the unprecedented 2013 Flood along the Colorado Front Range, USA. Pre- and post-flood images were composited and transformed with an ICA to identify change classes. Flooded pixels were extracted using image segmentation, and the resulting flood layer was refined with cloud and irrigated agricultural masks derived from the ICA. Visual assessment against aerial orthophotography showed close agreement with high water marks and scoured riverbanks, and a pixel-to-pixel validation with WorldView-2 imagery captured near peak flow yielded an overall accuracy of 87% and Kappa of 0.73. Additional tests showed a twofold increase in flood class accuracy over the commonly used modified normalized water index. The approach was able to simultaneously distinguish flood-related water and soil moisture from pre-existing water bodies and other spectrally similar classes within the narrow and braided channels of the study site. This was accomplished without the use of post-processing smoothing operations, enabling the important preservation of nuanced inundation patterns. Although flooding beneath moderate and sparse riparian vegetation canopy was captured, dense vegetation cover and paved regions of the floodplain were main sources of omission error, and commission errors occurred primarily in pixels of mixed land use and along the flood edge. Nevertheless, the unsupervised nature of ICA

  20. Identification of a robust gene signature that predicts breast cancer outcome in independent data sets

    International Nuclear Information System (INIS)

    Korkola, James E; Waldman, Frederic M; Blaveri, Ekaterina; DeVries, Sandy; Moore, Dan H II; Hwang, E Shelley; Chen, Yunn-Yi; Estep, Anne LH; Chew, Karen L; Jensen, Ronald H

    2007-01-01

    Breast cancer is a heterogeneous disease, presenting with a wide range of histologic, clinical, and genetic features. Microarray technology has shown promise in predicting outcome in these patients. We profiled 162 breast tumors using expression microarrays to stratify tumors based on gene expression. A subset of 55 tumors with extensive follow-up was used to identify gene sets that predicted outcome. The predictive gene set was further tested in previously published data sets. We used different statistical methods to identify three gene sets associated with disease free survival. A fourth gene set, consisting of 21 genes in common to all three sets, also had the ability to predict patient outcome. To validate the predictive utility of this derived gene set, it was tested in two published data sets from other groups. This gene set resulted in significant separation of patients on the basis of survival in these data sets, correctly predicting outcome in 62–65% of patients. By comparing outcome prediction within subgroups based on ER status, grade, and nodal status, we found that our gene set was most effective in predicting outcome in ER positive and node negative tumors. This robust gene selection with extensive validation has identified a predictive gene set that may have clinical utility for outcome prediction in breast cancer patients

  1. The calculation of maximum permissible exposure levels for laser radiation

    International Nuclear Information System (INIS)

    Tozer, B.A.

    1979-01-01

    The maximum permissible exposure data of the revised standard BS 4803 are presented as a set of decision charts which ensure that the user automatically takes into account such details as pulse length and pulse pattern, limiting angular subtense, combinations of multiple wavelength and/or multiple pulse lengths, etc. The two decision charts given are for the calculation of radiation hazards to skin and eye respectively. (author)

  2. Translating Research into Classroom Practice: Workplace Independence for Students with Severe Handicaps.

    Science.gov (United States)

    Hughes, Carolyn; And Others

    1989-01-01

    The article describes a process for use in high-school transition programs to promote student independence within the context of vocational training. Strategies described include: evaluating student independence in community-based settings, teaching student adaptability, and transferring control of student independence to work-related stimuli. A…

  3. The ASIND-MEPhI library of independent actinide fission product yields

    International Nuclear Information System (INIS)

    Bogomolova, E.S.; Grashin, A.F.; Efimenko, A.D.; Lukasevich, I.B.

    1997-01-01

    This data base of independent fission product yields has been set up at the Moscow Engineering Physics Institute on the basis of theoretical calculations within the framework of the super-nonequilibrium thermodynamic model. The database consists of independent yield sets for 1163 fission products in the wide range of fissile nuclides from thorium-229 to fermium-257 with excitation energies up to 20 MeV. The use of the theoretical model made it possible to raise the accuracy of prediction for poorly explored fission reactions. The number of yield sets is larger than in the ENDF/B. For example, photofission product yields are included in the ASIND-MEPhI database as virtual sets. (author). 14 refs, 17 figs, 2 tabs

  4. Dinosaur Metabolism and the Allometry of Maximum Growth Rate.

    Science.gov (United States)

    Myhrvold, Nathan P

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data are reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth rates of extant groups are found to have a great deal of overlap, including between groups with endothermic and ectothermic metabolism. Dinosaur growth rates show similar overlap, matching the rates found for mammals, reptiles and fish. The allometric scaling of growth rate with mass is found to have curvature (on a log-log scale) for many groups, contradicting the prevailing view that growth rate allometry follows a simple power law. Reanalysis shows that no correlation between growth rate and basal metabolic rate (BMR) has been demonstrated. These findings support the conclusion that growth rate allometry studies to date cannot be used to determine dinosaur metabolism as has previously been argued.

  5. Dinosaur Metabolism and the Allometry of Maximum Growth Rate

    Science.gov (United States)

    Myhrvold, Nathan P.

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data are reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth rates of extant groups are found to have a great deal of overlap, including between groups with endothermic and ectothermic metabolism. Dinosaur growth rates show similar overlap, matching the rates found for mammals, reptiles and fish. The allometric scaling of growth rate with mass is found to have curvature (on a log-log scale) for many groups, contradicting the prevailing view that growth rate allometry follows a simple power law. Reanalysis shows that no correlation between growth rate and basal metabolic rate (BMR) has been demonstrated. These findings support the conclusion that growth rate allometry studies to date cannot be used to determine dinosaur metabolism as has previously been argued. PMID:27828977

  6. Mid-depth temperature maximum in an estuarine lake

    Science.gov (United States)

    Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.

    2018-03-01

    The mid-depth temperature maximum (TeM) was measured in the estuarine Bol’shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ɛ model LAKE to the case, and found it successfully simulating the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer, sharp enough that the temperature increase with depth does not cause convective mixing and double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors which we identified as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction and water-sediments heat exchange. In addition to these, we formulate the mechanism of temperature maximum 'pumping', resulting from the phase shift between the diurnal cycles of mixed-layer depth and temperature maximum magnitude. Based on the LAKE model results we quantify the contribution of the above listed mechanisms and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define environmental conditions favouring summertime TeM development in salinity-stratified lakes as small mixed-layer depth, weak wind and cloudless weather. We exemplify the effect of mixed-layer depth on TeM by a set of selected lakes.

  7. Assessing suitable area for Acacia dealbata Mill. in the Ceira River Basin (Central Portugal) based on maximum entropy modelling approach

    Directory of Open Access Journals (Sweden)

    Jorge Pereira

    2015-12-01

    Full Text Available Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts that such processes have across several domains. A better understanding of the processes, the identification of the more susceptible areas, and the definition of preventive or mitigation measures are identified as critical for the purpose of reducing the associated impacts. The use of species distribution modelling might help in identifying areas that are more susceptible to invasion. This paper aims to present preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modelling approach, considered one of the correlative modelling techniques with the best predictive performance. Models whose validation is based on independent data sets present better performance; here, the evaluation is based on the AUC of the ROC accuracy measure.

  8. Edge Cut Domination, Irredundance, and Independence in Graphs

    OpenAIRE

    Fenstermacher, Todd; Hedetniemi, Stephen; Laskar, Renu

    2016-01-01

    An edge dominating set $F$ of a graph $G=(V,E)$ is an \textit{edge cut dominating set} if the subgraph $\langle V, G-F \rangle$ is disconnected. The \textit{edge cut domination number} $\gamma_{ct}(G)$ of $G$ is the minimum cardinality of an edge cut dominating set of $G$. In this paper we study the edge cut domination number and investigate its relationships with other parameters of graphs. We also introduce the properties edge cut irredundance and edge cut independence.

  9. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

    1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained, MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm used starts with the main failure of interest, the top event, and proceeds to basic independent component failures, called primary events, to resolve the fault tree to obtain the minimal sets. A key point of the algorithm is that an AND gate alone always increases the number of path sets and the size of cut sets; an OR gate alone always increases the number of cut sets and the size of path sets. Other types of logic gates must be described in terms of AND and OR logic gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates
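    The AND/OR duality noted above is easy to see in a toy top-down expansion. The sketch below is a simplified illustration in the spirit of MOCUS, not the original code; the example tree and event names are invented:

      from itertools import product

      # Each gate maps to ("AND" | "OR", children); leaves are primary events.
      TREE = {"TOP": ("OR", ["G1", "E3"]),
              "G1":  ("AND", ["E1", "E2"])}

      def cut_sets(node):
          if node not in TREE:                    # primary event
              return [frozenset([node])]
          kind, children = TREE[node]
          child_sets = [cut_sets(c) for c in children]
          if kind == "OR":                        # OR: more cut sets
              return [s for sets in child_sets for s in sets]
          # AND: every combination of children merged into one larger cut set
          return [frozenset().union(*c) for c in product(*child_sets)]

      def minimal(sets):                          # drop non-minimal supersets
          return [s for s in sets if not any(t < s for t in sets)]

      print(minimal(cut_sets("TOP")))             # [{E1, E2}, {E3}]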

  10. Leadership set-up

    DEFF Research Database (Denmark)

    Thude, Bettina Ravnborg; Stenager, Egon; von Plessen, Christian

    2018-01-01

    Findings: The study found that the leadership set-up did not have any clear influence on interdisciplinary cooperation, as all wards had a high degree of interdisciplinary cooperation independent of which leadership set-up they had. Instead, the authors found a relation between leadership set-up and leader...... could influence legitimacy. Originality/value: The study shows that leadership set-up is not the predominant factor that creates interdisciplinary cooperation; but rather, leader legitimacy also should be considered. Additionally, the study shows that leader legitimacy can be difficult to establish and that it cannot be taken for granted. This is something chief executive officers should bear in mind when they plan and implement new leadership structures. Therefore, it would also be useful to look more closely at how to achieve legitimacy in cases where the leader is from a different profession to the staff....

  11. Identification of "ever-cropped" land (1984-2010) using Landsat annual maximum NDVI image composites: Southwestern Kansas case study.

    Science.gov (United States)

    Maxwell, Susan K; Sylvester, Kenneth M

    2012-06-01

    A time series of 230 intra- and inter-annual Landsat Thematic Mapper images was used to identify land that was ever cropped during the years 1984 through 2010 for a five-county region in southwestern Kansas. Annual maximum Normalized Difference Vegetation Index (NDVI) image composites (NDVI(ann-max)) were used to evaluate the inter-annual dynamics of cropped and non-cropped land. Three feature images were derived from the 27-year NDVI(ann-max) image time series and used in the classification: 1) the maximum NDVI value that occurred over the entire 27-year time span (NDVI(max)), 2) the standard deviation of the annual maximum NDVI values for all years (NDVI(sd)), and 3) the standard deviation of the annual maximum NDVI values for years 1984-1986 (NDVI(sd84-86)) to improve Conservation Reserve Program land discrimination. Results of the classification were compared to three reference data sets: county-level USDA Census records (1982-2007) and two digital land cover maps (Kansas 2005 and USGS Trends Program maps, 1986-2000). The area of ever-cropped land for the five counties was on average 11.8% higher than the area estimated from Census records. Overall agreement between the ever-cropped land map and the 2005 Kansas map was 91.9%, and 97.2% for the Trends maps. Converting the intra-annual Landsat data set to a single annual maximum NDVI image composite considerably reduced the data set size and eliminated cloud and cloud-shadow effects, yet maintained information important for discriminating cropped land. Our results suggest that Landsat annual maximum NDVI image composites will be useful for characterizing land use and land cover change for many applications.
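    The three feature images are straightforward to compute once the annual maximum composites are stacked. A minimal sketch follows; the array name, shape and the random stand-in data are illustrative assumptions:

      import numpy as np

      # stack of annual maximum NDVI composites, shape (years, rows, cols)
      ndvi_ann_max = np.random.uniform(-0.1, 0.9, size=(27, 100, 100))

      ndvi_max = ndvi_ann_max.max(axis=0)             # NDVI(max), 1984-2010
      ndvi_sd = ndvi_ann_max.std(axis=0)              # NDVI(sd), all years
      ndvi_sd_84_86 = ndvi_ann_max[:3].std(axis=0)    # NDVI(sd84-86), CRP cue

      features = np.stack([ndvi_max, ndvi_sd, ndvi_sd_84_86], axis=-1)
      print(features.shape)                           # (100, 100, 3)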

  12. Design, implementation and evaluation of an independent real-time safety layer for medical robotic systems using a force-torque-acceleration (FTA) sensor.

    Science.gov (United States)

    Richter, Lars; Bruder, Ralf

    2013-05-01

    Most medical robotic systems require direct interaction or contact with the robot. Force-Torque (FT) sensors can easily be mounted to the robot to control the contact pressure. However, evaluation is often done in software, which leads to latencies. To overcome that, we developed an independent safety system, named FTA sensor, which is based on an FT sensor and an accelerometer. An embedded system (ES) runs a real-time monitoring system for continuously checking of the readings. In case of a collision or error, it instantaneously stops the robot via the robot's external emergency stop. We found that the ES implementing the FTA sensor has a maximum latency of [Formula: see text] ms to trigger the robot's emergency stop. For the standard settings in the application of robotized transcranial magnetic stimulation, the robot will stop after at most 4 mm. Therefore, it works as an independent safety layer preventing patient and/or operator from serious harm.

  13. LensEnt2: Maximum-entropy weak lens reconstruction

    Science.gov (United States)

    Marshall, P. J.; Hobson, M. P.; Gull, S. F.; Bridle, S. L.

    2013-08-01

    LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity and smoothness on scales of w arcsec, where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.

  14. Are Independent Probes Truly Independent?

    Science.gov (United States)

    Camp, Gino; Pecher, Diane; Schmidt, Henk G.; Zeelenberg, Rene

    2009-01-01

    The independent cue technique has been developed to test traditional interference theories against inhibition theories of forgetting. In the present study, the authors tested the critical criterion for the independence of independent cues: Studied cues not presented during test (and unrelated to test cues) should not contribute to the retrieval…

  15. Measure of functional independence dominates discharge outcome prediction after inpatient rehabilitation for stroke.

    Science.gov (United States)

    Brown, Allen W; Therneau, Terry M; Schultz, Billie A; Niewczyk, Paulette M; Granger, Carl V

    2015-04-01

    Identifying clinical data acquired at inpatient rehabilitation admission for stroke that accurately predict key outcomes at discharge could inform the development of customized plans of care to achieve favorable outcomes. The purpose of this analysis was to use a large comprehensive national data set to consider a wide range of clinical elements known at admission to identify those that predict key outcomes at rehabilitation discharge. Sample data were obtained from the Uniform Data System for Medical Rehabilitation data set with the diagnosis of stroke for the years 2005 through 2007. This data set includes demographic, administrative, and medical variables collected at admission and discharge and uses the FIM (functional independence measure) instrument to assess functional independence. Primary outcomes of interest were functional independence measure gain, length of stay, and discharge to home. The sample included 148,367 people (75% white; mean age, 70.6±13.1 years; 97% with ischemic stroke) admitted to inpatient rehabilitation a mean of 8.2±12 days after symptom onset. The total functional independence measure score, the functional independence measure motor subscore, and the case-mix group were equally the strongest predictors for any of the primary outcomes. The most clinically relevant 3-variable model used the functional independence measure motor subscore, age, and walking distance at admission (r² = 0.107). No important additional effect for any other variable was detected when added to this model. This analysis shows that a measure of functional independence in motor performance and age at rehabilitation hospital admission for stroke are predominant predictors of outcome at discharge in a uniquely large US national data set. © 2015 American Heart Association, Inc.

  16. Early visual cortex reflects initiation and maintenance of task set

    Science.gov (United States)

    Elkhetali, Abdurahman S.; Vaden, Ryan J.; Pool, Sean M.

    2014-01-01

    The human brain is able to process information flexibly, depending on a person's task. The mechanisms underlying this ability to initiate and maintain a task set are not well understood, but they are important for understanding the flexibility of human behavior and developing therapies for disorders involving attention. Here we investigate the differential roles of early visual cortical areas in initiating and maintaining a task set. Using functional Magnetic Resonance Imaging (fMRI), we characterized three different components of task set-related, but trial-independent activity in retinotopically mapped areas of early visual cortex, while human participants performed attention demanding visual or auditory tasks. These trial-independent effects reflected: (1) maintenance of attention over a long duration, (2) orienting to a cue, and (3) initiation of a task set. Participants performed tasks that differed in the modality of stimulus to be attended (auditory or visual) and in whether there was a simultaneous distractor (auditory only, visual only, or simultaneous auditory and visual). We found that patterns of trial-independent activity in early visual areas (V1, V2, V3, hV4) depend on attended modality, but not on stimuli. Further, different early visual areas play distinct roles in the initiation of a task set. In addition, activity associated with maintaining a task set tracks with a participant's behavior. These results show that trial-independent activity in early visual cortex reflects initiation and maintenance of a person's task set. PMID:25485712

  17. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data

    Directory of Open Access Journals (Sweden)

    Shanshan eLi

    2016-01-01

    Full Text Available Independent Component Analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions on different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks.
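    For readers unfamiliar with the underlying task, the following generic sketch shows the mixing/unmixing setup that any ICA algorithm, including the one proposed here, must solve. It uses an off-the-shelf FastICA for illustration rather than the paper's p-spline maximum likelihood estimator, and all signals are synthetic:

      import numpy as np
      from sklearn.decomposition import FastICA

      t = np.linspace(0, 8, 2000)
      sources = np.c_[np.sin(2 * t),                 # two non-Gaussian sources
                      np.sign(np.sin(3 * t))]
      mixing = np.array([[1.0, 0.5],
                         [0.4, 1.2]])
      observed = sources @ mixing.T                  # what we actually measure

      ica = FastICA(n_components=2, random_state=0)
      recovered = ica.fit_transform(observed)        # estimated source signals
      print(ica.mixing_.shape)                       # estimated mixing matrix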

  18. On the path independence conditions for discrete-continuous demand

    DEFF Research Database (Denmark)

    Batley, Richard; Ibáñez Rivas, Juan Nicolás

    2013-01-01

    We consider the manner in which the well-established path independence conditions apply to Small and Rosen's (1981) problem of discrete-continuous demand, focussing especially upon the restricted case of discrete choice (probabilistic) demand. We note that the consumer surplus measure promoted...... by Small and Rosen, which is specific to the probabilistic demand, imposes path independence to price changes a priori. We find that path independence to income changes can further be imposed provided a numeraire good is available in the consumption set. We show that, for practical purposes, Mc...

  19. Future independent power generation and implications for instruments and controls

    International Nuclear Information System (INIS)

    Williams, J.H.

    1991-01-01

    This paper reports that the independent power producers market comprises cogeneration, small power generation, and independent power production (IPP) segments. Shortfalls in future electric supply are expected to lead to significant growth in this market. The opportunities for instruments and controls will shift from traditional electric utility applications to the independent power market, with a more diverse set of needs. Importance will be placed on system reliability, quality of power and increased demand for clean kWh

  20. Solving the oil independence problem: Is it possible?

    International Nuclear Information System (INIS)

    Sovacool, Benjamin K.

    2007-01-01

    As currently discussed in political circles, oil independence is unattainable - lacking coherent meaning and wedding policymakers to the notion that they can never accomplish it. Contrary to this thinking, more than a dozen different sets of technologies and practices could increase domestic supply and reduce demand for oil to the point of making the US functionally independent from oil price shocks. However, achieving this goal demands concerted action to expand and diversify conventional domestic oil supplies, reduce overall demand in the transportation and buildings sector, and continue to develop alternative fuels. If policymakers undertook such actions today, the US could become oil independent by 2030. (author)

  1. Signal-dependent independent component analysis by tunable mother wavelets

    International Nuclear Information System (INIS)

    Seo, Kyung Ho

    2006-02-01

    The objective of this study is to improve standard independent component analysis when applied to real-world signals. Independent component analysis starts from the assumption that signals from different physical sources are statistically independent. But real-world signals such as EEG, ECG, MEG, and fMRI signals are not perfectly statistically independent. By definition, standard independent component analysis algorithms are not able to estimate statistically dependent sources, that is, when the assumption of independence does not hold. Therefore, some preprocessing stage is needed before independent component analysis. This paper starts from the simple intuition that source signals wavelet-transformed by a 'well-tuned' mother wavelet will be simplified sufficiently, so that the source separation will show better results. The tuning process between the source signal and the tunable mother wavelet was executed by the correlation coefficient method. The gamma component of the raw EEG signal was set as the target signal, and the wavelet transform was executed with the tuned mother wavelet and with standard mother wavelets. Simulation results for these wavelets are shown

  2. Independent component analysis in non-hypothesis driven metabolomics

    DEFF Research Database (Denmark)

    Li, Xiang; Hansen, Jakob; Zhao, Xinjie

    2012-01-01

    In a non-hypothesis driven metabolomics approach, plasma samples collected at six different time points (before, during and after an exercise bout) were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS). Since independent component analysis (ICA) does not need a priori... information on the investigated process and moreover can separate statistically independent source signals with non-Gaussian distribution, we aimed to elucidate the analytical power of ICA for the metabolic pattern analysis and the identification of key metabolites in this exercise study. A novel approach... based on descriptive statistics was established to optimize the ICA model. In the GC-TOF MS data set, the number of principal components after whitening and the number of independent components of ICA were optimized and systematically selected by descriptive statistics. The elucidated dominating independent

  3. Measurement of in-bore side loads and comparison to first maximum yaw

    Directory of Open Access Journals (Sweden)

    Donald E. Carlucci

    2016-04-01

    Full Text Available In-bore yaw of a projectile in a gun tube has been shown to result in range loss if the yaw is significant. An attempt was made to determine whether relationships between in-bore yaw and projectile First Maximum Yaw (FMY) were observable. Experiments were conducted in which pressure transducers were mounted near the muzzle of a 155 mm cannon in three sets of four. Each set formed a cruciform pattern to obtain a differential pressure across the projectile. These data were then integrated to form a picture of the overall pressure distribution along the side of the projectile. The pressure distribution was used to determine the magnitude and direction of the overturning moment acting on the projectile. This moment and its resulting angular acceleration were then compared to the actual first maximum yaw observed in the test. The degree of correlation was examined using various statistical techniques. Overall uncertainty in the projectile dynamics was between 20% and 40% of the mean values of FMY.

  4. Estimating Probable Maximum Precipitation by Considering Combined Effect of Typhoon and Southwesterly Air Flow

    Directory of Open Access Journals (Sweden)

    Cheng-Chin Liu

    2016-01-01

    Full Text Available Typhoon Morakot hit southern Taiwan in 2009, bringing 48 hr of heavy rainfall [close to the Probable Maximum Precipitation (PMP)] to the Tsengwen Reservoir catchment. This extreme rainfall event resulted from the combined (co-movement) effect of two climate systems (i.e., typhoon and southwesterly air flow). Based on the traditional PMP estimation method (i.e., the storm transposition method, STM), two PMP estimation approaches that consider the combined effect, the Amplification Index (AI) and Independent System (IS) approaches, are proposed in this work. The AI approach assumes that the southwesterly air flow precipitation in a typhoon event could reach its maximum value. The IS approach assumes that the typhoon and southwesterly air flow are independent weather systems. Based on these assumptions, calculation procedures for the two approaches were constructed for a case study on the Tsengwen Reservoir catchment. The results show that the PMP estimates for 6- to 60-hr durations using the two approaches are approximately 30% larger than the PMP estimates using the traditional STM without considering the combined effect. This work is a pioneering PMP estimation method that considers the combined effect of a typhoon and southwesterly air flow. Further studies on this issue are essential and encouraged.

  5. Architecture-independent power bound for vibration energy harvesters

    International Nuclear Information System (INIS)

    Halvorsen, E; Le, C P; Mitcheson, P D; Yeatman, E M

    2013-01-01

    The maximum output power of energy harvesters driven by harmonic vibrations is well known for a range of specific harvester architectures. An architecture-independent bound based on the mechanical input power also exists and gives a strict limit on achievable power with one mechanical degree of freedom, but it is a least upper bound only for lossless devices. We report a new theoretical bound on the output power of vibration energy harvesters that includes parasitic, linear mechanical damping while still being architecture independent. This bound greatly improves the previous bound at moderate force amplitudes and is compared to the performance of established harvester architectures, which are shown to agree with it in limiting cases. The bound is a hard limit on achievable power with one mechanical degree of freedom and cannot be circumvented by transducer or power-electronic-interface design
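    For orientation, the earlier lossless input-power bound referred to above has a simple closed form when the proof-mass motion is assumed harmonic at the stroke limit (a textbook sketch under that assumption, not the paper's new damped bound): for a proof mass m driven by a harmonic base acceleration of amplitude A at angular frequency ω, with internal displacement limit Z_l,

      % input-power bound assuming harmonic relative motion at the stroke limit
      P_{\max} = \tfrac{1}{2}\, m\, A\, Z_l\, \omega ,
      \qquad A = \omega^{2} Y_0

    The factor 1/2 comes from averaging the product of a sinusoidal inertial force and an optimally phased sinusoidal velocity over one cycle.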

  6. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, costs and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  7. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
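    In generic form, a combined criterion of this kind can be sketched as follows (a sketch in standard notation, not the authors' exact formulation): with R the detector response matrix, c_j the measured counts, and m_i a default model, the unfolded spectrum φ maximizes entropy plus the Poisson log-likelihood,

      \max_{\phi \ge 0}\;
        -\sum_i \phi_i \ln\frac{\phi_i}{m_i}
        \;+\; \sum_j \bigl( c_j \ln \lambda_j - \lambda_j \bigr),
      \qquad \lambda_j = \sum_i R_{ji}\, \phi_i

    The entropy term keeps the solution positive and pulls it toward the default model wherever the data do not constrain it.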

  8. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (the voltage of maximum power, the current of maximum power, and the maximum power itself) is plotted as a function of the time of day.
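    The differentiation step can be made concrete with a generic single-diode model of the panel (a sketch under that modelling assumption, not the project's measured data): with I(V) = I_L - I_0(e^{V/V_T} - 1),

      P(V) = V\,I(V), \qquad
      \frac{dP}{dV} = I(V) + V\,\frac{dI}{dV} = 0
      \;\Longrightarrow\;
      I_L + I_0 = I_0\, e^{V_{mp}/V_T}\Bigl(1 + \frac{V_{mp}}{V_T}\Bigr)

    Solving this transcendental condition numerically gives the voltage of maximum power V_mp, from which the current of maximum power I_mp = I(V_mp) and the maximum power P_max = V_mp I_mp follow.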

  9. States of maximum polarization for a quantum light field and states of a maximum sensitivity in quantum interferometry

    International Nuclear Information System (INIS)

    Peřinová, Vlasta; Lukš, Antonín

    2015-01-01

    The SU(2) group is used in two different fields of quantum optics: quantum polarization and quantum interferometry. Quantum degrees of polarization may be based on distances of a polarization state from the set of unpolarized states. The maximum polarization is achieved in the case where the state is pure and the distribution of the photon-number sums is optimized. In quantum interferometry, the SU(2) intelligent states also have the property that the Fisher measure of information is equal to the inverse minimum detectable phase shift, under the usual simplifying condition. Previously, the optimization of the Fisher information under a constraint was studied. Now, in the framework of constraint optimization, states similar to the SU(2) intelligent states are treated. (paper)

  10. Prediction of the Maximum Number of Repetitions and Repetitions in Reserve From Barbell Velocity.

    Science.gov (United States)

    García-Ramos, Amador; Torrejón, Alejandro; Feriche, Belén; Morales-Artacho, Antonio J; Pérez-Castilla, Alejandro; Padial, Paulino; Haff, Guy Gregory

    2018-03-01

    To provide 2 general equations to estimate the maximum possible number of repetitions (XRM) from the mean velocity (MV) of the barbell and the MV associated with a given number of repetitions in reserve, as well as to determine the between-sessions reliability of the MV associated with each XRM. After determination of the bench-press 1-repetition maximum (1RM; 1.15 ± 0.21 kg/kg body mass), 21 men (age 23.0 ± 2.7 y, body mass 72.7 ± 8.3 kg, body height 1.77 ± 0.07 m) completed 4 sets of as many repetitions as possible against relative loads of 60%1RM, 70%1RM, 80%1RM, and 90%1RM over 2 separate sessions. The different loads were tested in a randomized order with 10 min of rest between them. All repetitions were performed at the maximum intended velocity. Both the general equation to predict the XRM from the fastest MV of the set (CV = 15.8-18.5%) and the general equation to predict the MV associated with a given number of repetitions in reserve (CV = 14.6-28.8%) failed to provide data with acceptable between-subjects variability. However, a strong relationship (median r² = .984) and acceptable reliability (CV < 10%, ICC > .85) were observed between the fastest MV of the set and the XRM when considering individual data. These results indicate that generalized group equations are not acceptable methods for estimating the XRM-MV relationship or the number of repetitions in reserve. When attempting to estimate the XRM-MV relationship, one must use individualized relationships to objectively estimate the exact number of repetitions that can be performed in a training set.
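    The individualized procedure the authors recommend amounts to fitting each athlete's own fastest-MV versus repetitions relationship and inverting it. A minimal sketch follows; all velocities and repetition counts below are invented for illustration:

      import numpy as np

      mv_fastest = np.array([0.71, 0.55, 0.41, 0.28])  # m/s, one lifter's sets
      xrm = np.array([22, 14, 8, 3])                   # reps to failure per set

      slope, intercept = np.polyfit(mv_fastest, xrm, 1)

      def predict_xrm(mv):
          """Predicted maximum repetitions from the fastest MV of a new set."""
          return slope * mv + intercept

      print(round(predict_xrm(0.48)))                  # about 11 repetitions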

  11. BV solutions of rate independent differential inclusions

    Czech Academy of Sciences Publication Activity Database

    Krejčí, Pavel; Recupero, V.

    2014-01-01

    Roč. 139, č. 4 (2014), s. 607-619 ISSN 0862-7959 R&D Projects: GA ČR GAP201/10/2315 Institutional support: RVO:67985840 Keywords : differential inclusion * stop operator * rate independence * convex set Subject RIV: BA - General Mathematics http://hdl.handle.net/10338.dmlcz/144138

  12. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio; Genton, Marc G.; Yokota, Rio

    2015-01-01

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic

  13. Testing the assumption in ergonomics software that overall shoulder strength can be accurately calculated by treating orthopedic axes as independent.

    Science.gov (United States)

    Hodder, Joanne N; La Delfa, Nicholas J; Potvin, Jim R

    2016-08-01

    To predict shoulder strength, most current ergonomics software assumes independence of the strengths about each of the orthopedic axes. Using this independent axis approach (IAA), the shoulder can be predicted to have strengths as high as the resultant of the maximum moments about any two or three axes. We propose that shoulder strength is not independent between axes, and propose an approach that calculates the weighted average (WAA) between the strengths of the axes involved in the demand. Fifteen female participants performed maximum isometric shoulder exertions with their right arm placed in a rigid adjustable brace affixed to a tri-axial load cell. Maximum exertions were performed in 24 directions, including the four primary directions (horizontal flexion-extension and abduction-adduction) and at 15° increments between those axes. Moments were computed and comparisons made between the experimentally collected strengths and those predicted by the IAA and WAA methods. The IAA over-predicted strength in 14 of 20 non-primary exertion directions, while the WAA under-predicted strength in only 2 of these directions. Therefore, it is not valid to assume that shoulder axes are independent when predicting shoulder strengths between two orthopedic axes, and the WAA is an improvement over current methods for the posture tested. Copyright © 2015 Elsevier Ltd. All rights reserved.
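    One plausible reading of the WAA for a demand at angle θ between two primary axes is a linear interpolation of the two primary-axis strengths; this is offered as an interpretation for illustration, not the authors' published equation:

      S_{WAA}(\theta) \;=\; \Bigl(1 - \frac{\theta}{90^{\circ}}\Bigr) S_{0^{\circ}}
                      \;+\; \frac{\theta}{90^{\circ}}\, S_{90^{\circ}}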

  14. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes a state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed-sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
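
    The core of such a reconstruction is a gradient step on the log-likelihood followed by projection back onto the set of density matrices. A minimal, non-accelerated sketch follows (the paper's accelerated variant adds momentum); the POVM list, step size and iteration count are placeholders.

    ```python
    import numpy as np

    def project_to_density(H):
        """Project a Hermitian matrix onto the density matrices (positive
        semidefinite, unit trace) by projecting its eigenvalue vector onto
        the probability simplex."""
        w, V = np.linalg.eigh(H)
        u = np.sort(w)[::-1]                       # descending eigenvalues
        css = np.cumsum(u)
        j = np.nonzero(u + (1 - css) / np.arange(1, len(u) + 1) > 0)[0][-1]
        tau = (css[j] - 1) / (j + 1)
        w_proj = np.maximum(w - tau, 0)
        return (V * w_proj) @ V.conj().T

    def mle_projected_gradient(E, counts, dim, steps=500, lr=0.1):
        """Plain projected-gradient MLE sketch. E: list of POVM effects
        (dim x dim numpy arrays); counts: numpy array of observed counts."""
        rho = np.eye(dim) / dim
        for _ in range(steps):
            probs = np.array([np.real(np.trace(Ek @ rho)) for Ek in E])
            grad = sum((n / max(p, 1e-12)) * Ek
                       for n, p, Ek in zip(counts, probs, E))
            rho = project_to_density(rho + lr * grad / counts.sum())
        return rho
    ```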

  15. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval-2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than baseline and represent the highest F-score for the fine-grained English All-Words subtask.
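
    Multinomial logistic regression is the standard parametric form of a maximum entropy classifier, so a minimal stand-in for this kind of supervised WSD system can be sketched with off-the-shelf tools. The toy features and sense labels below are invented; the paper's rich feature set and Information Gain selection are not reproduced.

    ```python
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy context features -> sense label for the ambiguous word "bank".
    X = [{"prev": "river",   "next": "flooded"},
         {"prev": "savings", "next": "account"},
         {"prev": "muddy",   "next": "erosion"},
         {"prev": "central", "next": "loan"}]
    y = ["bank/GEO", "bank/FIN", "bank/GEO", "bank/FIN"]

    # Logistic regression trained on indicator features is a maximum
    # entropy classifier in the usual parameterization.
    clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(X, y)
    print(clf.predict([{"prev": "western", "next": "overdraft"}]))
    ```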

  16. A novel maximum power point tracking method for PV systems using fuzzy cognitive networks (FCN)

    Energy Technology Data Exchange (ETDEWEB)

    Karlis, A.D. [Electrical Machines Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece); Kottas, T.L.; Boutalis, Y.S. [Automatic Control Systems Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece)

    2007-03-15

    Maximum power point trackers (MPPTs) play an important role in photovoltaic (PV) power systems because they maximize the power output from a PV system for a given set of conditions, and therefore maximize the array efficiency. This paper presents a novel MPPT method based on fuzzy cognitive networks (FCN). The new method gives a good maximum power operation of any PV array under different conditions such as changing insolation and temperature. The numerical results show the effectiveness of the proposed algorithm. (author)

  17. Maximum entropy principle and hydrodynamic models in statistical mechanics

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2012-01-01

    This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulation. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed, including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures by using different band-structure models, different doping profiles and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix, and the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport model within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², ħ being the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: (i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the
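
    For reference, the MEP closure takes the least-biased distribution function compatible with a chosen set of moments; schematically (notation ours, not the review's):

    ```latex
    % Maximum-entropy closure for moments M_A with weight functions \psi_A:
    % maximize S[f] = -k_B \int f \ln f \, d\mathbf{k} subject to the constraints.
    \begin{align}
      f(\mathbf{r},\mathbf{k},t)
        &= \exp\!\Big(-\sum_A \lambda_A(\mathbf{r},t)\,\psi_A(\mathbf{k})\Big),\\
      M_A(\mathbf{r},t)
        &= \int \psi_A(\mathbf{k})\, f(\mathbf{r},\mathbf{k},t)\, d\mathbf{k},
    \end{align}
    % with the Lagrange multipliers \lambda_A fixed by the moment constraints;
    % the QMEP case expands the \lambda_A in powers of \hbar^2.
    ```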

  18. Device-Independent Certification of a Nonprojective Qubit Measurement

    Science.gov (United States)

    Gómez, Esteban S.; Gómez, Santiago; González, Pablo; Cañas, Gustavo; Barra, Johanna F.; Delgado, Aldo; Xavier, Guilherme B.; Cabello, Adán; Kleinmann, Matthias; Vértesi, Tamás; Lima, Gustavo

    2016-12-01

    Quantum measurements on a two-level system can have more than two independent outcomes, and in this case, the measurement cannot be projective. Measurements of this general type are essential to an operational approach to quantum theory, but so far, the nonprojective character of a measurement can only be verified experimentally by already assuming a specific quantum model of parts of the experimental setup. Here, we overcome this restriction by using a device-independent approach. In an experiment on pairs of polarization-entangled photonic qubits we violate by more than 8 standard deviations a Bell-like correlation inequality that is valid for all sets of two-outcome measurements in any dimension. We combine this with a device-independent verification that the system is best described by two qubits, which therefore constitutes the first device-independent certification of a nonprojective quantum measurement.
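
    For orientation, the prototypical Bell-type quantity estimated from such coincidence data is the CHSH value (the inequality actually violated in the paper is a related two-outcome correlation inequality, not plain CHSH); the counts below are invented.

    ```python
    def correlator(counts):
        """E(a,b) from coincidence counts for outcomes (+,+),(+,-),(-,+),(-,-)."""
        pp, pm, mp, mm = counts
        return (pp - pm - mp + mm) / (pp + pm + mp + mm)

    def chsh(c_ab, c_abp, c_apb, c_apbp):
        """CHSH value S = E(a,b) + E(a,b') + E(a',b) - E(a',b');
        |S| <= 2 for local models, up to 2*sqrt(2) in quantum theory."""
        return (correlator(c_ab) + correlator(c_abp)
                + correlator(c_apb) - correlator(c_apbp))

    # Hypothetical counts near the quantum maximum (S approx 2.8):
    print(chsh((427, 73, 69, 431), (421, 80, 75, 424),
               (418, 77, 82, 423), (71, 430, 425, 74)))
    ```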

  19. A practical approach for writer-dependent symbol recognition using a writer-independent symbol recognizer.

    Science.gov (United States)

    LaViola, Joseph J; Zeleznik, Robert C

    2007-11-01

    We present a practical technique for using a writer-independent recognition engine to improve the accuracy and speed while reducing the training requirements of a writer-dependent symbol recognizer. Our writer-dependent recognizer uses a set of binary classifiers based on the AdaBoost learning algorithm, one for each possible pairwise symbol comparison. Each classifier consists of a set of weak learners, one of which is based on a writer-independent handwriting recognizer. During online recognition, we also use the n-best list of the writer-independent recognizer to prune the set of possible symbols and thus reduce the number of required binary classifications. In this paper, we describe the geometric and statistical features used in our recognizer and our all-pairs classification algorithm. We also present the results of experiments that quantify the effect incorporating a writer-independent recognition engine into a writer-dependent recognizer has on accuracy, speed, and user training time.
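
    The all-pairs voting step with n-best pruning can be sketched as follows; the classifier interface and the toy classifiers are assumptions standing in for the paper's AdaBoost ensembles.

    ```python
    from collections import Counter
    from itertools import combinations

    def all_pairs_classify(features, nbest, pair_classifiers):
        """All-pairs voting with n-best pruning: only symbols surviving the
        writer-independent recognizer's n-best list are compared pairwise.
        `pair_classifiers[(a, b)]` is a callable returning the winner of
        the pair; interfaces here are assumptions, not the paper's API."""
        votes = Counter()
        for a, b in combinations(sorted(nbest), 2):
            votes[pair_classifiers[(a, b)](features)] += 1
        return votes.most_common(1)[0][0]

    # Toy pairwise classifiers standing in for the AdaBoost ensembles:
    clfs = {("a", "b"): lambda f: "a",
            ("a", "c"): lambda f: "a",
            ("b", "c"): lambda f: "c"}
    print(all_pairs_classify({}, ["b", "a", "c"], clfs))  # -> 'a'
    ```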

  20. Explaining the judicial independence of international courts: a comparative analysis

    DEFF Research Database (Denmark)

    Beach, Derek

    What factors allow some international courts (ICs) to rule against the express preferences of powerful member states, whereas others routinely defer to governments? While judicial independence is not the only factor explaining the strength of a given international institution, it is a necessary...... condition. The paper first develops three sets of competing explanatory variables that potentially can explain variations in the judicial independence of ICs. The causal effects of these explanatory variables upon variance in judicial independence are investigated in a comparative analysis of the ACJ, ECJ...

  1. Device-independent bit commitment based on the CHSH inequality

    International Nuclear Information System (INIS)

    Aharon, N; Massar, S; Pironio, S; Silman, J

    2016-01-01

    Bit commitment and coin flipping occupy a unique place in the device-independent landscape, as the only device-independent protocols thus far suggested for these tasks are reliant on tripartite GHZ correlations. Indeed, we know of no other bipartite tasks, which admit a device-independent formulation, but which are not known to be implementable using only bipartite nonlocality. Another interesting feature of these protocols is that the pseudo-telepathic nature of GHZ correlations—in contrast to the generally statistical character of nonlocal correlations, such as those arising in the violation of the CHSH inequality—is essential to their formulation and analysis. In this work, we present a device-independent bit commitment protocol based on CHSH testing, which achieves the same security as the optimal GHZ-based protocol, albeit at the price of fixing the time at which Alice reveals her commitment. The protocol is analyzed in the most general settings, where the devices are used repeatedly and may have long-term quantum memory. We also recast the protocol in a post-quantum setting where both honest and dishonest parties are restricted only by the impossibility of signaling, and find that overall the supra-quantum structure allows for greater security. (paper)

  2. A representation independent propagator. Pt. 1. Compact Lie groups

    International Nuclear Information System (INIS)

    Tome, W.A.

    1995-01-01

    Conventional path integral expressions for propagators are representation dependent. Rather than having to adapt each propagator to the representation in question, it is shown that for compact Lie groups it is possible to introduce a propagator that is representation independent. For a given set of kinematical variables this propagator is a single function independent of any particular choice of fiducial vector, which nonetheless correctly propagates each element of the coherent state representation associated with these kinematical variables. Although the configuration space is in general curved, the lattice phase-space path integral for the representation independent propagator nevertheless has the form appropriate to flat space. To illustrate the general theory a representation independent propagator is explicitly constructed for the Lie group SU(2). (orig.)

  3. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    In this paper, a combination of the maximum entropy method and Bayesian inference for the reliability assessment of deteriorating systems is proposed. Owing to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, because it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of a system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.

  4. PREP KITT, System Reliability by Fault Tree Analysis. PREP, Min Path Set and Min Cut Set for Fault Tree Analysis, Monte-Carlo Method. KITT, Component and System Reliability Information from Kinetic Fault Tree Theory

    International Nuclear Information System (INIS)

    Vesely, W.E.; Narum, R.E.

    1997-01-01

    1 - Description of problem or function: The PREP/KITT computer program package obtains system reliability information from a system fault tree. The PREP program finds the minimal cut sets and/or the minimal path sets of the system fault tree. (A minimal cut set is a smallest set of components such that if all the components are simultaneously failed the system is failed. A minimal path set is a smallest set of components such that if all of the components are simultaneously functioning the system is functioning.) The KITT programs determine reliability information for the components of each minimal cut or path set, for each minimal cut or path set, and for the system. Exact, time-dependent reliability information is determined for each component and for each minimal cut set or path set. For the system, reliability results are obtained by upper bound approximations or by a bracketing procedure in which various upper and lower bounds may be obtained as close to one another as desired. The KITT programs can handle independent components which are non-repairable or which have a constant repair time. Any assortment of non-repairable components and components having constant repair times can be considered. Any inhibit conditions having constant probabilities of occurrence can be handled. The failure intensity of each component is assumed to be constant with respect to time. The KITT2 program can also handle components which during different time intervals, called phases, may have different reliability properties. 2 - Method of solution: The PREP program obtains minimal cut sets by either direct deterministic testing or by an efficient Monte Carlo algorithm. The minimal path sets are obtained using the Monte Carlo algorithm. The reliability information is obtained by the KITT programs from numerical solution of the simple integral balance equations of kinetic tree theory. 3 - Restrictions on the complexity of the problem: The PREP program will obtain the minimal cut and
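
    The top-down expansion PREP performs deterministically can be illustrated on a toy fault tree; this MOCUS-style sketch is a generic reconstruction, not PREP's actual implementation.

    ```python
    from itertools import product

    def minimal_cut_sets(gates, top):
        """Top-down (MOCUS-style) expansion of a fault tree into minimal cut
        sets. `gates` maps a gate name to ('AND' | 'OR', [inputs]); any name
        not in `gates` is treated as a basic event."""
        def expand(node):
            if node not in gates:
                return [frozenset([node])]
            op, inputs = gates[node]
            child_sets = [expand(child) for child in inputs]
            if op == "OR":                 # union of the children's cut sets
                return [cs for sets in child_sets for cs in sets]
            # AND: merge one cut set from each child, in every combination
            return [frozenset().union(*combo) for combo in product(*child_sets)]

        cuts = set(expand(top))
        # discard any cut set that strictly contains another one
        return [c for c in cuts if not any(other < c for other in cuts)]

    tree = {"TOP": ("OR", ["G1", "C"]), "G1": ("AND", ["A", "B"])}
    print(minimal_cut_sets(tree, "TOP"))   # [{'A', 'B'}, {'C'}]
    ```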

  5. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
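
    For the general (asymmetric) scoring-matrix case discussed here, the per-character parsimony score on a fixed rooted tree is computed by Sankoff's dynamic program; a compact sketch with a toy tree and cost matrix:

    ```python
    import math

    def sankoff(tree, leaves, states, cost, root="root"):
        """Sankoff dynamic programming: minimum parsimony score of a single
        character on a rooted tree under a general (possibly asymmetric)
        cost matrix. `tree` maps internal nodes to children; `leaves` maps
        leaf names to observed states."""
        def table(node):
            if node in leaves:
                return {s: (0 if s == leaves[node] else math.inf)
                        for s in states}
            child_tables = [table(child) for child in tree[node]]
            return {s: sum(min(t[x] + cost[s][x] for x in states)
                           for t in child_tables)
                    for s in states}
        return min(table(root).values())

    cost = {"A": {"A": 0, "G": 1}, "G": {"A": 1, "G": 0}}
    tree = {"root": ["n1", "L3"], "n1": ["L1", "L2"]}
    print(sankoff(tree, {"L1": "A", "L2": "G", "L3": "G"}, "AG", cost))  # 1
    ```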

  6. Six weeks of 400-meter aerobic running training with three repetitions in two sets or two repetitions in three sets equally improves the 3000-meter race-walking speed of grade VII students of SMPN 11 Denpasar

    Directory of Open Access Journals (Sweden)

    Dixon E.M. Taek Bete

    2014-08-01

    The author's observations at SMPN 11 Denpasar showed that its students had never achieved maximum performance (a championship) in race walking, apparently because of declining student motivation and interest, with students drawn more to other sports such as volleyball, soccer, futsal, table tennis and badminton, or perhaps because the training methods used did not follow the required training principles. The purpose of this training study was to determine the increase in 3000-meter race-walking speed in the two treatment groups. The study used an experimental method. The population was drawn from grade VII of the school. Thirty-two students who met the inclusion and exclusion criteria were sampled randomly from the population and divided into two groups of 16. Training was performed with a frequency of 4 times a week for 6 weeks. Group I performed 400-meter aerobic running in two sets of three repetitions, and Group II in three sets of two repetitions. The 3000-meter race-walking results were recorded before and after training, and the data were analyzed with SPSS. As the data were normally distributed and homogeneous, paired t-tests were used to compare mean values before and after training within each group, and independent t-tests to determine differences in mean values between the two groups. The paired t-tests showed significant improvements in both Group I and Group II (p < 0.05). The independent t-test found no significant difference between the two groups before training (p > 0.05), and after training both groups had increased their race-walking speed equally (no significant between-group difference, p > 0.05). The conclusion is that 6 weeks of 400-meter aerobic running training in two sets of three repetitions or three sets of two repetitions equally increases 3000-meter race-walking speed.

  7. ROC [Receiver Operating Characteristics] study of maximum likelihood estimator human brain image reconstructions in PET [Positron Emission Tomography] clinical practice

    International Nuclear Information System (INIS)

    Llacer, J.; Veklerov, E.; Nolan, D.; Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J.

    1990-10-01

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of 18F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab

  8. Maximum credible accident analysis for TR-2 reactor conceptual design

    International Nuclear Information System (INIS)

    Manopulo, E.

    1981-01-01

    A new reactor, TR-2, of 5 MW, designed in cooperation with CEN/GRENOBLE, is under construction in the open pool of the TR-1 reactor of 1 MW set up by AMF Atomics at the Cekmece Nuclear Research and Training Center. In this report the fission product inventory and the doses released after the maximum credible accident have been studied. The diffusion of the gaseous fission products to the environment and the potential radiation risks to the population have been evaluated

  9. Developing a complex independent component analysis technique to extract non-stationary patterns from geophysical time-series

    Science.gov (United States)

    Forootan, Ehsan; Kusche, Jürgen

    2016-04-01

    Geodetic/geophysical observations, such as time series of global terrestrial water storage change or of sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In recent decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and, more recently, independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables, even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i).
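
    Step (i), building the complex data set, amounts to forming the analytic signal of each centered time series; a minimal sketch on synthetic data (the complex ICA step via fourth-order cumulants is not shown):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Centered series -> analytic signal. The real part keeps the
    # observations; the imaginary part (the Hilbert transform) carries
    # the temporal rate of variability.
    t = np.linspace(0.0, 10.0, 1000)
    series = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)
    series -= series.mean()

    analytic = hilbert(series)   # complex: series + 1j * H(series)
    print(analytic.dtype, np.allclose(analytic.real, series))
    ```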

  10. The Emotional Climate of the Interpersonal Classroom in a Maximum Security Prison for Males.

    Science.gov (United States)

    Meussling, Vonne

    1984-01-01

    Examines the nature, the task, and the impact of teaching in a maximum security prison for males. Data are presented concerning the curriculum design used in order to create a nonevaluative atmosphere. Inmates' reactions to self-disclosure and open communication in a prison setting are evaluated. (CT)

  11. Mechanical limits to maximum weapon size in a giant rhinoceros beetle.

    Science.gov (United States)

    McCullough, Erin L

    2014-07-07

    The horns of giant rhinoceros beetles are a classic example of the elaborate morphologies that can result from sexual selection. Theory predicts that sexual traits will evolve to be increasingly exaggerated until survival costs balance the reproductive benefits of further trait elaboration. In Trypoxylus dichotomus, long horns confer a competitive advantage to males, yet previous studies have found that they do not incur survival costs. It is therefore unlikely that horn size is limited by the theoretical cost-benefit equilibrium. However, males sometimes fight vigorously enough to break their horns, so mechanical limits may set an upper bound on horn size. Here, I tested this mechanical limit hypothesis by measuring safety factors across the full range of horn sizes. Safety factors were calculated as the ratio between the force required to break a horn and the maximum force exerted on a horn during a typical fight. I found that safety factors decrease with increasing horn length, indicating that the risk of breakage is indeed highest for the longest horns. Structural failure of oversized horns may therefore oppose the continued exaggeration of horn length driven by male-male competition and set a mechanical limit on the maximum size of rhinoceros beetle horns. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  12. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .

  13. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT)

  14. Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code

    Directory of Open Access Journals (Sweden)

    Adel Ahmadi

    2015-01-01

    Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block code (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. Search spaces of symbols are substantially reduced by employing the independent parts and statistics of noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder's performance is superior to many of the recently published state-of-the-art solutions in terms of complexity level. More specifically, it was possible to verify that applying the new algorithm with 1024-QAM decreases the computational complexity below that of the state-of-the-art solution with 16-QAM.

  15. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  16. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  17. Field dependence-independence and participation in physical activity by college students.

    Science.gov (United States)

    Liu, Wenhao

    2006-06-01

    Field-independent individuals, compared with field-dependent individuals, have higher sports potential and advantages in sport-related settings. Little research, however, has been conducted on the association between field dependence-independence and participation in physical activity. This study examined this association for college students who participated in physical activities in and beyond physical education classes. The Group Embedded Figures Test distinguished 40 field-dependent from 40 field-independent participants. Activity logs kept during one semester showed that field-independent participants were significantly more physically active and that their physical activity behaviors were more sport-related than those of field-dependent participants.

  18. Enhancer of zeste homologue 2 plays an important role in neuroblastoma cell survival independent of its histone methyltransferase activity.

    Science.gov (United States)

    Bate-Eya, Laurel T; Gierman, Hinco J; Ebus, Marli E; Koster, Jan; Caron, Huib N; Versteeg, Rogier; Dolman, M Emmy M; Molenaar, Jan J

    2017-04-01

    Neuroblastoma is predominantly characterised by chromosomal rearrangements. Next to V-Myc Avian Myelocytomatosis Viral Oncogene Neuroblastoma Derived Homolog (MYCN) amplification, chromosome 7 and 17q gains are frequently observed. We identified a neuroblastoma patient with a regional 7q36 gain, encompassing the enhancer of zeste homologue 2 (EZH2) gene. EZH2 is the histone methyltransferase of lysine 27 of histone H3 (H3K27me3) that forms the catalytic subunit of the polycomb repressive complex 2. H3K27me3 is commonly associated with the silencing of genes involved in cellular processes such as cell cycle regulation, cellular differentiation and cancer. High EZH2 expression correlated with poor prognosis and overall survival independent of MYCN amplification status. Unexpectedly, treatment of 3 EZH2-high expressing neuroblastoma cell lines (IMR32, CHP134 and NMB), with EZH2-specific inhibitors (GSK126 and EPZ6438) resulted in only a slight G1 arrest, despite maximum histone methyltransferase activity inhibition. Furthermore, colony formation in cell lines treated with the inhibitors was reduced only at concentrations much higher than necessary for complete inhibition of EZH2 histone methyltransferase activity. Knockdown of the complete protein with three independent shRNAs resulted in a strong apoptotic response and decreased cyclin D1 levels. This apoptotic response could be rescued by overexpressing EZH2ΔSET, a truncated form of wild-type EZH2 lacking the SET transactivation domain necessary for histone methyltransferase activity. Our findings suggest that high EZH2 expression, at least in neuroblastoma, has a survival function independent of its methyltransferase activity. This important finding highlights the need for studies on EZH2 beyond its methyltransferase function and the requirement for compounds that will target EZH2 as a complete protein. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Independent component analysis using prior information for signal detection in a functional imaging system of the retina

    NARCIS (Netherlands)

    Barriga, E. Simon; Pattichis, Marios; Ts’o, Dan; Abramoff, Michael; Kardon, Randy; Kwon, Young; Soliz, Peter

    2011-01-01

    Independent component analysis (ICA) is a statistical technique that estimates a set of sources mixed by an unknown mixing matrix using only a set of observations. For this purpose, the only assumption is that the sources are statistically independent. In many applications, some information about

  1. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  2. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  3. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.

    Directory of Open Access Journals (Sweden)

    Richard R Stein

    2015-07-01

    Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.
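
    As a concrete, if coarse, illustration of inverting such a model from data: the naive mean-field approximation recovers pairwise couplings from minus the inverse covariance matrix. This is only a first-order stand-in for the inference schemes reviewed in the paper.

    ```python
    import numpy as np

    def mean_field_couplings(samples):
        """Naive mean-field inversion for a pairwise maximum-entropy
        (Ising-like) model: couplings approximated by minus the inverse
        covariance matrix of the data. A coarse but standard first-order
        approximation, not full maximum-likelihood inference."""
        C = np.cov(samples, rowvar=False)
        J = -np.linalg.inv(C + 1e-6 * np.eye(C.shape[0]))  # ridge for stability
        np.fill_diagonal(J, 0.0)
        return J

    samples = np.random.choice([-1, 1], size=(5000, 10))  # hypothetical data
    print(mean_field_couplings(samples).shape)            # (10, 10)
    ```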

  4. ON THE MAXIMUM MASS OF STELLAR BLACK HOLES

    International Nuclear Information System (INIS)

    Belczynski, Krzysztof; Fryer, Chris L.; Bulik, Tomasz; Ruiter, Ashley; Valsecchi, Francesca; Vink, Jorick S.; Hurley, Jarrod R.

    2010-01-01

    We present the spectrum of compact object masses: neutron stars and black holes (BHs) that originate from single stars in different environments. In particular, we calculate the dependence of maximum BH mass on metallicity and on some specific wind mass loss rates (e.g., Hurley et al. and Vink et al.). Our calculations show that the highest-mass BHs observed in the Galaxy, M_bh ∼ 15 M_sun, in the high-metallicity environment (Z = Z_sun = 0.02) can be explained with the stellar models and wind mass loss rates adopted here. To reach this result we had to set luminous blue variable mass loss rates at the level of ∼10^-4 M_sun yr^-1 and to employ metallicity-dependent Wolf-Rayet winds. With such winds, calibrated on Galactic BH mass measurements, the maximum BH mass obtained for moderate metallicity (Z = 0.3 Z_sun = 0.006) is M_bh,max = 30 M_sun. This is a rather striking finding, as the mass of the most massive known stellar BH is M_bh = 23-34 M_sun and, in fact, it is located in a small star-forming galaxy with moderate metallicity. We find that in a very low (globular cluster-like) metallicity environment the maximum BH mass can be as high as M_bh,max = 80 M_sun (Z = 0.01 Z_sun = 0.0002). It is interesting to note that the X-ray luminosity from Eddington-limited accretion onto an 80 M_sun BH is of the order of ∼10^40 erg s^-1 and is comparable to the luminosities of some known ultra-luminous X-ray sources. We emphasize that our results were obtained for single stars only and that binary interactions may alter these maximum BH masses (e.g., accretion from a close companion). This is strictly a proof-of-principle study which demonstrates that stellar models can naturally explain even the most massive known stellar BHs.

  5. The Business of DIY. Characteristics, motives and ideologies of micro-independent record labels.

    NARCIS (Netherlands)

    H.J.C.J. Hitters (Erik); R.P. den Drijver (Robin)

    2017-01-01

    This paper examines micro-independent record companies, mostly set up by musicians according to the Do It Yourself (DIY) principle. They serve as distribution channels for counter-mainstream, often local, music. This paper discusses the characteristics of micro-(DIY) independents

  6. Maximum and minimum entropy states yielding local continuity bounds

    Science.gov (United States)

    Hanson, Eric P.; Datta, Nilanjana

    2018-04-01

    Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ_ε^*(σ) [respectively, ρ_{*,ε}(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ_ε^*(σ) and ρ_{*,ε}(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the minimum- and maximum-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.

  7. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charge for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  8. Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduced our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478

  9. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylases (HDACs) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.
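
    The enrichment metric itself is straightforward to compute once a benchmarking set supplies labeled actives and decoys; the scores below are synthetic stand-ins for docking or similarity scores.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Ligand-enrichment evaluation sketch: the AUC of the ROC curve
    # summarizes how well actives rank above decoys (0.5 = random).
    active_scores = np.random.normal(0.8, 0.1, 50)
    decoy_scores = np.random.normal(0.5, 0.1, 950)

    y_true = np.r_[np.ones(50), np.zeros(950)]
    y_score = np.r_[active_scores, decoy_scores]
    print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
    ```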

  10. Degeneracy relations in QCD and the equivalence of two systematic all-orders methods for setting the renormalization scale

    Directory of Open Access Journals (Sweden)

    Huan-Yu Bi

    2015-09-01

    The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high-order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the R_δ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio R_{e+e-} and the Higgs partial width Γ(H→bb̄). Both methods lead to the same resummed ('conformal') series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {β_i}-terms in the pQCD expansion are taken into account. We also show that special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.

  11. What is the effect of optimum independent parameters on solar heating systems?

    International Nuclear Information System (INIS)

    Kaçan, Erkan; Ulgen, Koray; Kaçan, Erdal

    2015-01-01

    Highlights: • The effects of 4 independent parameters on the efficiency of the solar heating system are examined. • 3 of the 4 independent parameters are found to be decisive parameters for system design. • Maximum exergetic efficiency exceeded 11% for the optimized process. • Maximum environmental efficiency reached up to 95% for the optimized process. • The optimum outside temperature and solar radiation are found to be 22 °C and 773 W/m² for all responses. - Abstract: Researchers have been closely involved with Solar Combisystems recently, but there is a lack of studies presenting the optimum design parameters. Therefore, in this study the influence of four major variables, namely outside temperature, inside temperature, solar radiation on a horizontal surface and instantaneous efficiency of the solar collector, on the energetic, exergetic and environmental efficiencies of Solar Combisystems is investigated, and system optimization is carried out using response surface methodology. Measured parameters and energetic, exergetic and environmental performance curves are obtained, and a statistical model is created in parallel with the actual data. It is found that the statistical model is significant and all "lack-of-fit" values are non-significant. Thus, it is proved that the statistical model strongly represents the design model. Outside temperature, solar radiation on a horizontal surface and instantaneous efficiency of the solar collector are the decisive parameters for all responses, but instantaneous efficiency of the solar collector is not decisive for environmental efficiency. Maximum exergetic efficiency exceeded 11% and maximum environmental efficiency reached up to 95% for the optimized process. The optimum values of the outside temperature and solar radiation are found to be 22 °C and 773 W/m² for all responses; on the other hand, the optimum collector efficiency is found to be around 60% for the energetic and exergetic efficiency values. Inside temperature is not a decisive parameter for any of the responses.

  12. Predecessor queries in dynamic integer sets

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    1997-01-01

    We consider the problem of maintaining a set of n integers in the range 0..2^w − 1 under the operations of insertion, deletion, predecessor queries, minimum queries and maximum queries on a unit-cost RAM with word size w bits. Let f(n) be an arbitrary nondecreasing smooth function satisfying n...
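
    To fix the semantics of the operations (not the paper's word-RAM data structure, whose bounds are far better), a sorted-list stand-in:

    ```python
    import bisect

    class SimpleIntegerSet:
        """Dynamic predecessor problem: insert, delete, predecessor, min and
        max. This sketch is O(n) per update and O(log n) per query, shown
        only to define the operations; the paper achieves much stronger
        word-RAM bounds."""
        def __init__(self):
            self.keys = []

        def insert(self, x):
            i = bisect.bisect_left(self.keys, x)
            if i == len(self.keys) or self.keys[i] != x:
                self.keys.insert(i, x)

        def delete(self, x):
            i = bisect.bisect_left(self.keys, x)
            if i < len(self.keys) and self.keys[i] == x:
                self.keys.pop(i)

        def predecessor(self, x):
            """Largest element strictly smaller than x, or None."""
            i = bisect.bisect_left(self.keys, x)
            return self.keys[i - 1] if i > 0 else None

        def minimum(self):
            return self.keys[0] if self.keys else None

        def maximum(self):
            return self.keys[-1] if self.keys else None

    s = SimpleIntegerSet()
    for v in (5, 1, 9):
        s.insert(v)
    print(s.predecessor(9), s.minimum(), s.maximum())  # 5 1 9
    ```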

  13. Comparing BV solutions of rate independent processes

    Czech Academy of Sciences Publication Activity Database

    Krejčí, Pavel; Recupero, V.

    2014-01-01

    Roč. 21, č. 1 (2014), s. 121-146 ISSN 0944-6532 R&D Projects: GA ČR GAP201/10/2315 Institutional support: RVO:67985840 Keywords : variational inequalities * rate independence * convex sets Subject RIV: BA - General Mathematics Impact factor: 0.552, year: 2014 http://www.heldermann.de/JCA/JCA21/JCA211/jca21006.htm

  14. Factors affecting seed set in brussels sprouts, radish and cyclamen

    NARCIS (Netherlands)

    Murabaa, El A.I.M.

    1957-01-01

    If brussels sprouts were self-fertilized, seed setting increased with the age of the flower buds until a maximum some days before the buds opened. After that, set decreased rapidly. Warmth shortened the period over which selfing was possible and shortened the period to the opening of the flowers. Most

  15. International urodynamic basic spinal cord injury data set

    DEFF Research Database (Denmark)

    Craggs, M.; Kennelly, M.; Schick, E.

    2008-01-01

    of the data set was developed after review and comments by members of the Executive Committee of the International SCI Standards and Data Sets, the ISCoS Scientific Committee, the ASIA Board, relevant and interested (international) organizations and societies (around 40) and persons, and the ISCoS Council......: Variables included in the International Urodynamic Basic SCI Data Set are date of data collection, bladder sensation during filling cystometry, detrusor function, compliance during filling cystometry, function during voiding, detrusor leak point pressure, maximum detrusor pressure, cystometric bladder...

  16. Optimising Mycobacterium tuberculosis detection in resource limited settings.

    Science.gov (United States)

    Alfred, Nwofor; Lovette, Lawson; Aliyu, Gambo; Olusegun, Obasanya; Meshak, Panwal; Jilang, Tunkat; Iwakun, Mosunmola; Nnamdi, Emenyonu; Olubunmi, Onuoha; Dakum, Patrick; Abimiku, Alash'le

    2014-03-03

    Light-emitting diode (LED) fluorescence microscopy has made acid-fast bacilli (AFB) detection faster and more efficient, although its optimal performance in resource-limited settings is still being studied. We assessed the optimal performances of light and fluorescence microscopy in routine conditions of a resource-limited setting and evaluated the digestion time for sputum samples for maximum yield of positive cultures. Cross-sectional study. Facility-based study involving samples from routine patients receiving tuberculosis treatment and care from the main tuberculosis case referral centre in northern Nigeria. The study included 450 sputum samples from 150 new patients with a clinical diagnosis of pulmonary tuberculosis. The 450 samples were pooled into 150 specimens, examined independently with mercury vapour lamp (FM), LED CysCope (CY) and Primo Star iLED (PiLED) fluorescence microscopies, and with Ziehl-Neelsen (ZN) microscopy, to assess the performance of each technique compared with liquid culture. The cultured specimens were decontaminated with BD Mycoprep (4% NaOH-1% NLAC and 2.9% sodium citrate) for 10, 15 and 20 min before incubation in the Mycobacteria growth indicator tube (MGIT) system, and growth was examined for acid-fast bacilli (AFB). Of the 150 specimens examined by direct microscopy, 44 (29%), 60 (40%), 49 (33%) and 64 (43%) were AFB positive by ZN, FM, CY and iLED microscopy, respectively. Digestion of sputum samples for 10, 15 and 20 min yielded mycobacterial growth in 72 (48%), 81 (54%) and 68 (45%) of the digested samples, respectively, after incubation in the MGIT system. In routine laboratory conditions of a resource-limited setting, our study has demonstrated the superiority of fluorescence microscopy over the conventional ZN technique. Digestion of sputum samples for 15 min yielded more positive cultures.

  17. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job's variable processing time is described by an increasing or a decreasing function dependent on the position of the job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine, and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due date.

  18. Does combined strength training and local vibration improve isometric maximum force? A pilot study.

    Science.gov (United States)

    Goebel, Ruben; Haddad, Monoem; Kleinöder, Heinz; Yue, Zengyuan; Heinen, Thomas; Mester, Joachim

    2017-01-01

    The aim of the study was to determine whether a combination of strength training (ST) and local vibration (LV) improved the isometric maximum force of arm flexor muscles. ST was applied to the left arm of the subjects; LV was applied to the right arm of the same subjects. The main aim was to examine the effect of LV during a dumbbell biceps curl (Scott curl) on the isometric maximum force of the opposite muscle in the same subjects. It is hypothesized that the intervention with LV produces a greater gain in isometric force of the arm flexors than ST alone. Twenty-seven collegiate students participated in the study. The training load was 70% of the individual 1RM. Four sets of 12 repetitions were performed three times per week for four weeks. The right arm of all subjects represented the vibration-trained body side (VS) and the left arm served as the traditionally trained body side (TTS). A significant increase in isometric maximum force occurred on both body sides (arms). The VS, however, significantly increased isometric maximum force by about 43%, in contrast to 22% for the TTS. The combined intervention of ST and LV improves the isometric maximum force of arm flexor muscles. Level of evidence: III.

  19. Analysis of factors that influence the maximum number of repetitions in two upper-body resistance exercises: curl biceps and bench press.

    Science.gov (United States)

    Iglesias, Eliseo; Boullosa, Daniel A; Dopico, Xurxo; Carballeira, Eduardo

    2010-06-01

    The purpose of this study was to analyze the influence of exercise type, set configuration, and relative intensity load on the relationship between the 1 repetition maximum (1RM) and the maximum number of repetitions (MNR). Thirteen male subjects, experienced in resistance training, were tested in the bench press and biceps curl for 1RM, MNR at 90% of 1RM with a cluster set configuration (rest of 30 s between repetitions), and MNR at 70% of 1RM with a traditional set configuration (no rest between repetitions). A linear encoder was used for measuring the displacement of the load. Analysis of variance revealed a significant effect of load (p < 0.05) in the bench press and biceps curl, while the remaining comparisons were not significant (p > 0.05). The correlation between 1RM and MNR was significant for the medium-intensity load in the biceps curl (r = −0.574; p < 0.05). Velocity along the set appears to be similar at a given relative intensity for subjects with differences in maximum strength levels. From our results, we suggest the employment of MNR rather than % of 1RM for training monitoring. Furthermore, we suggest the introduction of the cluster set configuration for upper-body assessment of MNR and for upper-body muscular endurance training at high-intensity loads, as it seems an efficient approach when seeking sessions with greater training volumes. This could be an interesting approach for such sports as wrestling or weightlifting.

  20. An Estimator of Mutual Information and its Application to Independence Testing

    Directory of Open Access Journals (Sweden)

    Joe Suzuki

    2016-03-01

    This paper proposes a novel estimator of mutual information for discrete and continuous variables. The main feature of this estimator is that it is zero for a large sample size n if and only if the two variables are independent. The estimator constructs several histograms, computes an estimate of mutual information for each, and chooses the maximum value. We prove that the number of histograms constructed has an upper bound of O(log n) and apply this fact to the search. We compare the performance of the proposed estimator with an estimator of the Hilbert-Schmidt independence criterion (HSIC), though the proposed method is based on the minimum description length (MDL) principle and the HSIC provides a statistical test. The proposed method completes the estimation in O(n log n) time, whereas the HSIC kernel computation requires O(n³) time. We also present examples in which the HSIC fails to detect independence but the proposed method successfully detects it.
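
    To make the histogram construction above concrete, here is a minimal Python sketch of the general idea: estimate mutual information from 2D histograms at O(log n) granularities and keep the largest penalized value. The doubling scheme and the exact MDL-style penalty below are illustrative assumptions, not the paper's estimator.

        import numpy as np

        def histogram_mi(x, y, bins):
            # Plug-in mutual information estimate from a 2D histogram.
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
            py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
            nz = pxy > 0
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        def mdl_mi(x, y):
            # Scan O(log n) granularities; keep the best penalized estimate.
            # A value of zero indicates independence.
            n = len(x)
            best, k = 0.0, 2
            while k * k <= n:                     # about log2(sqrt(n)) histograms
                penalty = (k - 1) ** 2 * np.log(n) / (2 * n)  # MDL-style cost
                best = max(best, histogram_mi(x, y, k) - penalty)
                k *= 2
            return best

        rng = np.random.default_rng(0)
        x = rng.normal(size=1000)
        print(mdl_mi(x, rng.normal(size=1000)))            # ~0: independent
        print(mdl_mi(x, x + 0.3 * rng.normal(size=1000)))  # clearly positive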

  1. One repetition maximum bench press performance: a new approach for its evaluation in inexperienced males and females: a pilot study.

    Science.gov (United States)

    Bianco, Antonino; Filingeri, Davide; Paoli, Antonio; Palma, Antonio

    2015-04-01

    The aim of this study was to evaluate a new method to perform the one repetition maximum (1RM) bench press test, by combining previously validated predictive and practical procedures. Eight young male and 7 female participants, with no previous experience of resistance training, performed a first set of repetitions to fatigue (RTF) with a workload corresponding to ⅓ of their body mass (BM) for a maximum of 25 repetitions. Following a 5-min recovery period, a second set of RTF was performed with a workload corresponding to ½ of participants' BM. The number of repetitions performed in this set was then used to predict the workload to be used for the 1RM bench press test using Mayhew's equation. Oxygen consumption, heart rate and blood lactate were monitored before, during and after each 1RM attempt. A significant effect of gender was found on the maximum number of repetitions achieved during the RTF set performed with ½ of participants' BM (males: 25.0 ± 6.3; females: 11.0 ± 10.6; t = 6.2; p < 0.05) and, consequently, on the workload predicted for the 1RM bench press test. We conclude that, by combining previously validated predictive equations with practical procedures (i.e. using a fraction of participants' BM to determine the workload for an RTF set), the new method we tested appeared safe, accurate (particularly in females) and time-effective in the practical evaluation of 1RM performance in inexperienced individuals.

  2. A formalism for independent checking of Gamma Knife dose calculations

    International Nuclear Information System (INIS)

    Tsai Jensan; Engler, Mark J.; Rivard, Mark J.; Mahajan, Anita; Borden, Jonathan A.; Zheng Zhen

    2001-01-01

    For stereotactic radiosurgery using the Leksell Gamma Knife system, it is important to perform a pre-treatment verification of the maximum dose calculated with the Leksell GammaPlan® (D_LGP) stereotactic radiosurgery system. This verification can be incorporated as part of a routine quality assurance (QA) procedure to minimize the chance of a hazardous overdose. To implement this procedure, a formalism has been developed to calculate the dose D_CAL(X, Y, Z, d_av, t) using the following parameters: average target depth (d_av), coordinates (X, Y, Z) of the maximum dose location or any other dose point(s) to be verified, 3-dimensional (3-dim) beam profiles or off-center-ratios (OCR) of the four helmets, helmet size i, output factor O_i, plug factor P_i, each shot j coordinates (x, y, z)_{i,j}, and shot treatment time (t_{i,j}). The average depth of the target d_av was obtained either from MRI/CT images or ruler measurements of the Gamma Knife Bubble Head Frame. D_CAL and D_LGP were then compared to evaluate the accuracy of this independent calculation. The proposed calculation for an independent check of D_LGP has been demonstrated to be accurate and reliable, and thus serves as a QA tool for Gamma Knife stereotactic radiosurgery.

  3. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per-image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate-analysis ROC curve, where the scaling factors are given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to these data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well.

  4. Shielding the spinal cord is necessary when junctioning abutting fields with independent collimation in head and neck radiotherapy

    International Nuclear Information System (INIS)

    Rosenthal, David I.; McDonough, James; Kassaee, Alireza

    1997-01-01

    Purpose: Asymmetric collimation is a relatively new method of junctioning abutting fields with non-diverging beam edges. When this technique is used at the junction of lateral and low anterior fields in three-field head and neck setups, there should, in theory, be a perfect match, with no overdose or underdose at the match line. We have performed dosimetric measurements to evaluate the actual dosimetry at the central axis. Materials and Methods: X-ray verification film was placed in a water-equivalent phantom at a depth of 4 cm, corresponding to an isocentric distance of 100 cm. A double-exposure technique was used to mimic two half-beam-blocked fields abutting at the central axis. Each half of the film was irradiated with 50 monitor units using a 6 MV photon beam. One of the collimators was set to an off-axis position to force a gap or overlap of the radiation fields at the isocenter in increments of 1 mm. The films were scanned with a laser densitometer with a resolution of 300 μm. The beam profiles were evaluated at the region of overdose or underdose around the match line. Results: The dose on the central axis varied linearly from -50% (field gap of 3 mm) to +50% (field overlap of 3 mm). Surprisingly, the width (defined as full-width at half-maximum, FWHM) of the region of overdose or underdose around the match line is 3 mm for field gaps or overlaps of 1 and 2 mm. The width of the region is 4.5 mm for field gaps or overlaps of 3 mm. The larger than expected width of this region is due to the addition of the two abutting penumbras. Conclusion: Asymmetric collimation with half-beam blocks may overdose the spinal cord. Calibration specifications generally allow for a 1 mm tolerance in the position of each independent jaw. In a calibrated machine, this could lead to a 2 mm field overlap. A field overlap of just 1 mm results in a FWHM region of overdose measuring 3 mm with a maximum dose of 140%. To our knowledge, there are no current recommendations ...

  5. Overcoming Barriers in Unhealthy Settings

    Directory of Open Access Journals (Sweden)

    Michael K. Lemke

    2016-03-01

    We investigated the phenomenon of sustained health-supportive behaviors among long-haul commercial truck drivers, who belong to an occupational segment with extreme health disparities. With a focus on setting-level factors, this study sought to discover ways in which individuals exhibit resiliency while immersed in endemically obesogenic environments, as well as understand setting-level barriers to engaging in health-supportive behaviors. Using a transcendental phenomenological research design, 12 long-haul truck drivers who met screening criteria were selected using purposeful maximum sampling. Seven broad themes were identified: access to health resources, barriers to health behaviors, recommended alternative settings, constituents of health behavior, motivation for health behaviors, attitude toward health behaviors, and trucking culture. We suggest applying ecological theories of health behavior and settings approaches to improve driver health. We also propose the Integrative and Dynamic Healthy Commercial Driving (IDHCD) paradigm, grounded in complexity science, as a new theoretical framework for improving driver health outcomes.

  6. Real time estimation of photovoltaic modules characteristics and its application to maximum power point operation

    Energy Technology Data Exchange (ETDEWEB)

    Garrigos, Ausias; Blanes, Jose M.; Carrasco, Jose A. [Area de Tecnologia Electronica, Universidad Miguel Hernandez de Elche, Avda. de la Universidad s/n, 03202 Elche, Alicante (Spain); Ejea, Juan B. [Departamento de Ingenieria Electronica, Universidad de Valencia, Avda. Dr Moliner 50, 46100 Valencia, Valencia (Spain)

    2007-05-15

    In this paper, an approximate curve-fitting method for photovoltaic modules is presented. The operation is based on solving a simple solar cell electrical model by a microcontroller in real time. Only four voltage and current coordinates are needed to obtain the solar module parameters and set its operation at maximum power under any conditions of illumination and temperature. Despite its simplicity, this method is suitable for low-cost real-time applications, such as the control-loop reference generator in photovoltaic maximum power point circuits. The theory supporting the estimator is presented, together with simulations and experimental results. (author)
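
    To illustrate the flavour of such a four-point estimation, the sketch below fits a simplified single-diode model to four made-up (voltage, current) coordinates and reads off the maximum power point; the module data, the neglected series/shunt resistances and the solver are all assumptions, not the authors' microcontroller algorithm.

        import numpy as np
        from scipy.optimize import curve_fit

        def cell_model(v, i_ph, i_0, n_vt):
            # Simplified single-diode model; series/shunt resistance neglected.
            return i_ph - i_0 * (np.exp(v / n_vt) - 1.0)

        # Four illustrative (voltage, current) coordinates from a module
        v = np.array([0.0, 14.0, 17.0, 20.0])
        i = np.array([5.00, 4.70, 3.95, 1.50])
        popt, _ = curve_fit(cell_model, v, i, p0=[5.0, 1e-3, 2.5],
                            bounds=([0.0, 1e-9, 0.5], [10.0, 1e-1, 10.0]))

        # Locate the maximum power point on the fitted curve
        vv = np.linspace(0.0, 21.0, 500)
        p = vv * cell_model(vv, *popt)
        print("Estimated MPP: %.1f W at %.2f V" % (p.max(), vv[p.argmax()]))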

  7. Optimal Classical Simulation of State-Independent Quantum Contextuality

    Science.gov (United States)

    Cabello, Adán; Gu, Mile; Gühne, Otfried; Xu, Zhen-Peng

    2018-03-01

    Simulating quantum contextuality with classical systems requires memory. A fundamental yet open question is what is the minimum memory needed and, therefore, the precise sense in which quantum systems outperform classical ones. Here, we make rigorous the notion of classically simulating quantum state-independent contextuality (QSIC) in the case of a single quantum system submitted to an infinite sequence of measurements randomly chosen from a finite QSIC set. We obtain the minimum memory needed to simulate arbitrary QSIC sets via classical systems under the assumption that the simulation should not contain any oracular information. In particular, we show that, while classically simulating two qubits tested with the Peres-Mermin set requires log₂ 24 ≈ 4.585 bits, simulating a single qutrit tested with the Yu-Oh set requires, at least, 5.740 bits.
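
    The quoted 4.585 bits is just the base-2 logarithm of the 24 internal states needed for the Peres-Mermin simulation; a one-line check (illustrative arithmetic only):

        import math
        print(math.log2(24))  # 4.584962500721156 -> the 4.585 bits quoted above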

  8. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  9. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    Science.gov (United States)

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.

  10. Guidelines for the marketing of independent schools in South Africa

    Directory of Open Access Journals (Sweden)

    Reaan Immelman

    2015-02-01

    Objective: The primary objective of the study is to recommend marketing guidelines for independent primary schools, with the focus on product and people in the marketing mix. This objective was achieved by identifying choice factors influencing parents' selection of independent primary schools, identifying the most important choice factors, and identifying demographic differences regarding the importance parents attached to these factors. Problem investigated: Some independent schools in South Africa find it difficult to market themselves effectively as a result of a lack of information pertaining to the choice factors identified by parents when selecting independent primary schools. A comprehensive set of choice factors will provide a more accurate picture of the criteria parents perceive as important in independent school selection. Methodology: The methodological approach followed was exploratory and quantitative in nature. The sample consisted of 669 respondents from 30 independent schools in Gauteng in South Africa. A structured questionnaire, with a five-point Likert scale, was fielded to gather the data. Descriptive and factor analysis approaches were used to analyse the results. Findings and implications: The main finding is that a total of 29 different choice factors were identified that parents perceive as important when selecting an independent primary school. The most important factor for parents when making a choice is the small size of the classes, followed by the religious ethos of the school as well as qualified and committed educators. This indicates that parents have a comprehensive set of choice factors and implies that a better understanding of these factors by independent schools may assist them to focus their marketing efforts more optimally in order to attract new learners. Originality and value of the research: Very little research exists with specific reference to independent school marketing in South Africa.

  11. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Science.gov (United States)

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
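
    For orientation, the unmodified Daetwyler et al. (2010) expectation can be written in a few lines; the weighting factor w and the example parameter values below are illustrative assumptions, since the abstract does not spell out the modified equation.

        import math

        def expected_accuracy(n_train, h2, m_e, w=1.0):
            # Daetwyler et al. (2010) form, scaled by an illustrative weight w.
            return w * math.sqrt(n_train * h2 / (n_train * h2 + m_e))

        # e.g. 5698 training bulls, heritability-like reliability h2 = 0.9,
        # and Me = 1000 independent chromosome segments (all assumed values)
        print(round(expected_accuracy(5698, 0.9, 1000.0), 3))  # ~0.915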

  13. Multi-Objective Evaluation of Target Sets for Logistics Networks

    National Research Council Canada - National Science Library

    Emslie, Paul

    2000-01-01

    In the presence of many objectives, such as reducing maximum flow, lengthening routes, and avoiding collateral damage, all at minimal risk to our pilots, the problem of determining the best target set is complex...

  14. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the MPPT of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)

  15. On the maximum and minimum of two modified Gamma-Gamma variates with applications

    KAUST Repository

    Al-Quwaiee, Hessa

    2014-04-01

    In this work, we derive the statistical characteristics of the maximum and the minimum of two modified Gamma-Gamma variates in closed form in terms of Meijer's G-function and the extended generalized bivariate Meijer's G-function. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and of (ii) a dual-hop free-space optical relay transmission system. Computer-based Monte-Carlo simulations verify our new analytical results.

  16. Stochastic inventory management at a service facility with a set of ...

    African Journals Online (AJOL)

    We consider a continuous-review perishable inventory system at a service facility with a finite waiting capacity. The maximum inventory level is fixed and customers arrive according to a Markov arrival process. The lifetime of each item and the service time are assumed to have independent exponential distributions.

  17. Progress in engineering design of Indian LLCB TBM set for testing in ITER

    International Nuclear Information System (INIS)

    Chaudhuri, Paritosh; Ranjithkumar, S.; Sharma, Deepak; Danani, Chandan; Swami, H.L.; Bhattacharya, R.; Patel, Anita; Kumar, E. Rajendra; Vyas, K.N.

    2014-01-01

    Highlights: • The tritium breeding for the LLCB TBM has been evaluated by neutronic analysis. • Details of the thermal-hydraulic analyses performed for the FW and internal components of the LLCB TBM and shield block are provided. • The dimensions of the CB zones and the Pb–Li flow have been optimized so that the maximum temperatures of all components lie within their respective temperature windows. • The design and thermal analysis of the shield block and attachment system have been performed. - Abstract: The Indian Lead–Lithium Ceramic Breeder (LLCB) Test Blanket Module (TBM) is the Indian DEMO-relevant blanket module, as a part of the TBM program in ITER. The LLCB TBM will be tested from the first phase of ITER operation in one-half of ITER port no. 2. The LLCB TBM-set consists of the LLCB TBM module and a shield block, which are attached with the help of attachment systems. This LLCB TBM set is inserted in a water-cooled stainless steel frame called the ‘TBM frame’, which also provides the separation between the neighboring TBM-sets (Chinese TBM set) in port no. 2. In the LLCB TBM, high-pressure helium gas is used to cool the first wall (FW) structure, and lead–lithium eutectic (Pb–Li), flowing separately around the ceramic breeder (CB) pebble bed, cools the TBM internals, which are heated by volumetric neutron heating during plasma operation. Low-pressure helium is purged inside the CB zones to extract the bred tritium. Thermal-structural analyses have been performed independently on the LLCB TBM and shield block for the TBM set using ANSYS. This paper also describes the performance analysis of individual components of the LLCB TBM set and their different configurations to optimize their performance.

  18. Determination of Maximum Follow-up Speed of Electrode System of Resistance Projection Welders

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2004-01-01

    the weld process settings for stable production and high quality of products. In this paper, the maximum follow-up speed of the electrode system was tested by using a specially designed device which can be mounted on all types of machines and is easily applied in industry; the corresponding mathematical expression was derived based on a mathematical model. Good accordance was found between test and model.

  19. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    Science.gov (United States)

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene Ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable more accurate classifiers to be learned. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Thus, gene sets defined on the basis of regulatory interactions improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.

  20. Prediction of the maximum absorption wavelength of azobenzene dyes by QSPR tools

    Science.gov (United States)

    Xu, Xuan; Luan, Feng; Liu, Huitao; Cheng, Jianbo; Zhang, Xiaoyun

    2011-12-01

    The maximum absorption wavelength (λmax) of a large data set of 191 azobenzene dyes was predicted by quantitative structure-property relationship (QSPR) tools. The λmax was correlated with 4 molecular descriptors calculated from the structure of the dyes alone. The multiple linear regression (MLR) method and the non-linear radial basis function neural network (RBFNN) method were applied to develop the models. The statistical parameters provided by the MLR model were R² = 0.893, R²(adj) = 0.893, q²(LOO) = 0.884, F = 1214.871, RMS = 11.6430 for the training set; and R² = 0.849, R²(adj) = 0.845, q²(ext) = 0.846, F = 207.812, RMS = 14.0919 for the external test set. The RBFNN model gave even better statistical results: R² = 0.920, R²(adj) = 0.919, q²(LOO) = 0.898, F = 1664.074, RMS = 9.9215 for the training set, and R² = 0.895, R²(adj) = 0.892, q²(ext) = 0.895, F = 314.256, RMS = 11.6427 for the external test set. This theoretical method provides a simple, precise and alternative way to obtain the λmax of azobenzene dyes.
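
    A schematic of the two modelling routes is sketched below; it is illustrative only: the descriptors are synthetic, and scikit-learn's kernel ridge regression with an RBF kernel stands in for the paper's RBF neural network.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.normal(size=(191, 4))               # 4 descriptors for 191 dyes
        lam = (400.0 + X @ np.array([25.0, -10.0, 15.0, 5.0])
               + 8.0 * np.sin(2.0 * X[:, 0]) + rng.normal(0.0, 5.0, 191))

        X_tr, X_te, y_tr, y_te = train_test_split(X, lam, test_size=0.25,
                                                  random_state=0)
        mlr = LinearRegression().fit(X_tr, y_tr)
        rbf = KernelRidge(kernel="rbf", alpha=0.01, gamma=0.1).fit(X_tr, y_tr)
        print("MLR test R2:", round(mlr.score(X_te, y_te), 3))
        print("RBF test R2:", round(rbf.score(X_te, y_te), 3))  # held-out comparison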

  1. Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon

    DEFF Research Database (Denmark)

    Fischer, Paul

    1997-01-01

    This paper investigates the problem where one is given a finite set of n points in the plane, each of which is labeled either “positive” or “negative”. We consider bounded convex polygons, the vertices of which are positive points and which do not contain any negative point. It is shown how such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time becomes O(M n³ log n). It is also shown how to find a maximum convex polygon which contains a given point in time O(n³ log n). Two parallel algorithms for the basic problem are also presented. The first one runs in time O(n log n) using O(n²) processors; the second one has polylogarithmic time but needs O ...

  2. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  3. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching in which tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.

  4. An empirical test of the 'shark nursery area concept' in Texas bays using a long-term fisheries-independent data set

    Science.gov (United States)

    Froeschke, John T.; Stunz, Gregory W.; Sterba-Boatwright, Blair; Wildhaber, Mark L.

    2010-01-01

    Using a long-term fisheries-independent data set, we tested the 'shark nursery area concept' proposed by Heupel et al. (2007) with the suggested working assumptions that a shark nursery habitat would: (1) have an abundance of immature sharks greater than the mean abundance across all habitats where they occur; (2) be used by sharks repeatedly through time (years); and (3) see immature sharks remaining within the habitat for extended periods of time. We tested this concept using young-of-the-year (age 0) and juvenile (age 1+ yr) bull sharks Carcharhinus leucas from gill-net surveys conducted in Texas bays from 1976 to 2006 to estimate the potential nursery function of 9 coastal bays. Of the 9 bay systems considered as potential nursery habitat, only Matagorda Bay satisfied all 3 criteria for young-of-the-year bull sharks. Both Matagorda and San Antonio Bays met the criteria for juvenile bull sharks. Through these analyses we examined the utility of this approach for characterizing nursery areas and we also describe some practical considerations, such as the influence of the temporal or spatial scales considered when applying the nursery role concept to shark populations.

  5. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflective coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.

  6. Independent Study Workbooks for Proofs in Group Theory

    Science.gov (United States)

    Alcock, Lara; Brown, Gavin; Dunning, Clare

    2015-01-01

    This paper describes a small-scale research project based on workbooks designed to support independent study of proofs in a first course on abstract algebra. We discuss the lecturers' aims in designing the workbooks, and set these against a background of research on students' learning of group theory and on epistemological beliefs and study habits…

  7. Constitutive equations for the Doi-Edwards model without independent alignment

    DEFF Research Database (Denmark)

    Hassager, Ole; Hansen, Rasmus

    2010-01-01

    We present two representations of the Doi-Edwards model without Independent Alignment explicitly expressed in terms of the Finger strain tensor, its inverse and its invariants. The two representations provide explicit expressions for the stress prior to and after Rouse relaxation of chain stretch, respectively. The maximum deviations from the exact representations in simple shear, biaxial extension and uniaxial extension are of order 2%. Based on these two representations, we propose a framework for Doi-Edwards models including chain stretch in the memory integral form.

  8. Firms’ Board Independence and Corporate Social Performance: A Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Eduardo Ortas

    2017-06-01

    This paper investigates the influence of organizations' board independence on corporate social performance (CSP) using a meta-analytic approach. A sample of 87 published papers is used to identify a set of underlying moderating effects in that relationship. Specifically, differences in the system of corporate governance, CSP measurement models and market conditions have been considered as moderating variables. The results show that the independence of a company's board positively influences CSP. This is because companies with more independent directors on their boards are more likely to commit to stakeholder engagement, environmental preservation and community well-being. Interestingly, the results also show that the positive connection between board independence and CSP is stronger in civil law countries and when CSP is measured by self-reporting data. Finally, the strength of the influence of the independence of a firm's board on CSP varies significantly under different market conditions. The paper concludes by presenting the main implications for academics, practitioners and policy makers.

  9. An efficient quantum scheme for Private Set Intersection

    Science.gov (United States)

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    Private Set Intersection allows a client to privately compute the set intersection with the collaboration of the server, which is one of the most fundamental problems in privacy-preserving multiparty computation. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. Therefore, it is very suitable for big data services in the Cloud or in large-scale client-server networks.

  10. Comorbidity is an independent prognostic factor in women with uterine corpus cancer

    DEFF Research Database (Denmark)

    Noer, Mette C; Sperling, Cecilie; Christensen, Ib J

    2014-01-01

    OBJECTIVE: To determine whether comorbidity independently affects overall survival in women with uterine corpus cancer. DESIGN: Cohort study. SETTING: Denmark. STUDY POPULATION: A total of 4244 patients registered in the Danish Gynecologic Cancer database with uterine corpus cancer from 1 January .... RESULTS: Univariate survival analysis showed a significant (p < 0.05) association between comorbidity and overall survival. In multivariate analysis, comorbidity remained an independent prognostic factor, with hazard ratios ranging from 1.27 to 1.42 in mild, 1.69 to 1.74 in moderate, and 1.72 to 2.48 in severe comorbidity. Performance status was independently associated with overall survival and was found to slightly reduce the prognostic impact of comorbidity. CONCLUSION: Comorbidity is an independent prognostic factor ...

  11. Measuring conflict and power in strategic settings

    OpenAIRE

    Giovanni Rossi

    2009-01-01

    This is a quantitative approach to measuring conflict and power in strategic settings: noncooperative games (with cardinal or ordinal utilities) and blockings (without any preference specification). A (0, 1)-ranged index is provided, taking its minimum on common interest games, and its maximum on a newly introduced class termed “full conflict” games.

  12. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    Science.gov (United States)

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution for the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and for generating a probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data; it allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI.

  13. Maximum Diameter Measurements of Aortic Aneurysms on Axial CT Images After Endovascular Aneurysm Repair: Sufficient for Follow-up?

    International Nuclear Information System (INIS)

    Baumueller, Stephan; Nguyen, Thi Dan Linh; Goetti, Robert Paul; Lachat, Mario; Seifert, Burkhardt; Pfammatter, Thomas; Frauenfelder, Thomas

    2011-01-01

    Purpose: To assess the accuracy of maximum diameter measurements of aortic aneurysms after endovascular aneurysm repair (EVAR) on axial computed tomographic (CT) images in comparison to maximum diameter measurements perpendicular to the intravascular centerline for follow-up, using three-dimensional (3D) volume measurements as the reference standard. Materials and Methods: Forty-nine consecutive patients (73 ± 7.5 years; range, 51–88 years) who underwent EVAR of an infrarenal aortic aneurysm were retrospectively included. Two blinded readers twice independently measured the maximum aneurysm diameter on axial CT images performed at discharge and at 1 and 2 years after the intervention. The maximum diameter perpendicular to the centerline was automatically measured. Volumes of the aortic aneurysms were calculated by dedicated semiautomated 3D segmentation software (3surgery, 3mensio, the Netherlands). Changes in diameter of 0.5 cm and in volume of 10% were considered clinically significant. Intra- and interobserver agreements were calculated by intraclass correlations (ICC) in a random effects analysis of variance. The two unidimensional measurement methods were correlated to the reference standard. Results: Intra- and interobserver agreements for maximum aneurysm diameter measurements were excellent (ICC = 0.98 and ICC = 0.96, respectively). There was an excellent correlation between maximum aneurysm diameters measured on axial CT images and 3D volume measurements (r = 0.93, P < 0.001), as well as between maximum diameter measurements perpendicular to the centerline and 3D volume measurements (r = 0.93, P < 0.001). Conclusion: Measurement of maximum aneurysm diameters on axial CT images is an accurate, reliable, and robust method for follow-up after EVAR and can be used in daily routine.

  14. Comparison of least-squares vs. maximum likelihood estimation for standard spectrum technique of β−γ coincidence spectrum analysis

    International Nuclear Information System (INIS)

    Lowrey, Justin D.; Biegalski, Steven R.F.

    2012-01-01

    The spectrum deconvolution analysis tool (SDAT) software code was written and tested at The University of Texas at Austin utilizing the standard spectrum technique to determine activity levels of Xe-131m, Xe-133m, Xe-133, and Xe-135 in β–γ coincidence spectra. SDAT was originally written to utilize the method of least-squares to calculate the activity of each radionuclide component in the spectrum. Recently, maximum likelihood estimation was also incorporated into the SDAT tool. This is a robust statistical technique to determine the parameters that maximize the Poisson distribution likelihood function of the sample data. In this case it is used to parameterize the activity level of each of the radioxenon components in the spectra. A new test dataset was constructed utilizing Xe-131m placed on a Xe-133 background to compare the robustness of the least-squares and maximum likelihood estimation methods for low counting statistics data. The Xe-131m spectra were collected independently from the Xe-133 spectra and added to generate the spectra in the test dataset. The true independent counts of Xe-131m and Xe-133 are known, as they were calculated before the spectra were added together. Spectra with both high and low counting statistics are analyzed. Studies are also performed by analyzing only the 30 keV X-ray region of the β–γ coincidence spectra. Results show that maximum likelihood estimation slightly outperforms least-squares for low counting statistics data.
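
    The difference between the two fitting criteria is easy to demonstrate on synthetic data; the sketch below uses two made-up Gaussian standard spectra as stand-ins for the radioxenon library, not SDAT's actual spectra.

        import numpy as np
        from scipy.optimize import minimize, nnls

        rng = np.random.default_rng(1)
        chan = np.arange(64)
        # Two hypothetical normalised standard spectra (stand-ins for Xe-131m/Xe-133)
        s1 = np.exp(-0.5 * ((chan - 20) / 4.0) ** 2); s1 /= s1.sum()
        s2 = np.exp(-0.5 * ((chan - 40) / 6.0) ** 2); s2 /= s2.sum()
        A = np.column_stack([s1, s2])
        true = np.array([30.0, 300.0])           # weak component on a strong one
        counts = rng.poisson(A @ true)

        a_ls, _ = nnls(A, counts.astype(float))  # non-negative least squares

        def nll(a):                              # Poisson negative log-likelihood
            lam = np.clip(A @ a, 1e-12, None)
            return float(lam.sum() - counts @ np.log(lam))

        a_ml = minimize(nll, a_ls + 1.0, bounds=[(0.0, None)] * 2).x
        print("least squares      :", a_ls)
        print("maximum likelihood :", a_ml)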

  15. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.

  16. The Influence of Cognitive Learning Style and Learning Independence on the Students' Learning Outcomes

    Science.gov (United States)

    Prayekti

    2018-01-01

    Students of Open University are strongly required to be able to study independently. They rely heavily on the cognitive learning styles that they have in attempt to get maximum scores in every final exam. The participants of this research were students in the Physics Education program taking Thermodynamic subject course. The research analysis…

  17. Maximum wind energy extraction strategies using power electronic converters

    Science.gov (United States)

    Wang, Quincy Qing

    2003-10-01

    This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation on wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through

  18. Maximum principle for a stochastic delayed system involving terminal state constraints.

    Science.gov (United States)

    Wen, Jiaqiang; Shi, Yufeng

    2017-01-01

    We investigate a stochastic optimal control problem where the controlled system is described by a stochastic differential delayed equation and, at the terminal time, the state is constrained to a convex set. We first introduce an equivalent backward delayed system described by a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.

  19. EXTREME MAXIMUM AND MINIMUM AIR TEMPERATURE IN MEDİTERRANEAN COASTS IN TURKEY

    Directory of Open Access Journals (Sweden)

    Barbaros Gönençgil

    2016-01-01

    In this study, we determined extreme maximum and minimum air temperatures in both summer and winter seasons at stations on the Mediterranean coast of Turkey. The daily maximum and minimum temperature data of 24 meteorological stations for the period 1970–2010 were used. From this database, a set of four extreme temperature indices was applied: warm (TX90) and cold (TN10) days, and warm (WSDI) and cold (CSDI) spell durations. Threshold values were calculated for each station to determine the temperatures that were above and below the seasonal norms in winter and summer. The TX90 index displays a positive, statistically significant trend, while TN10 displays a negative, nonsignificant trend. The occurrence of warm spells shows a statistically significant increasing trend, while cold spells show a significantly decreasing trend over the Mediterranean coastline of Turkey.

  20. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used ...

  1. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 100 to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.

  2. 12 CFR 1291.6 - Homeownership set-aside programs.

    Science.gov (United States)

    2010-01-01

    ... as part of a disaster relief effort. (3) Maximum grant amount. Members shall provide AHP direct... Section 1291.6 Banks and Banking FEDERAL HOUSING FINANCE AGENCY HOUSING GOALS AND MISSION FEDERAL HOME LOAN BANKS' AFFORDABLE HOUSING PROGRAM § 1291.6 Homeownership set-aside programs. (a) Establishment of...

  3. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view, so the theory of the maximum principle applied here is well suited to the problem. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  4. Statistic method of research reactors maximum permissible power calculation

    International Nuclear Information System (INIS)

    Grosheva, N.A.; Kirsanov, G.A.; Konoplev, K.A.; Chmshkyan, D.V.

    1998-01-01

    The technique for calculating the maximum permissible power of a research reactor, at which the probability of a thermal-process accident does not exceed a specified value, is presented. The statistical method is used for the calculations. The determining function related to reactor safety is regarded as a known function of the reactor power and of many statistically independent values, whose list includes the reactor process parameters, geometrical characteristics of the reactor core and fuel elements, as well as random factors connected with the reactor's specific features. Heat flux density or temperature is taken as the limiting factor. The program realization of the method discussed is briefly described. The results of calculating the PIK reactor margin coefficients for different probabilities of a thermal-process accident are considered as an example. It is shown that the probability of an accident with fuel element melting in the hot zone is lower than 10⁻⁸ per year at the reactor rated power.

  5. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia over 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived, and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
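
    A minimal sketch of the two fitting routes on synthetic daily rainfall; the threshold-selection and declustering steps of the study are omitted, and all numbers below are placeholders.

```python
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(2)
daily_rain = rng.gamma(shape=0.4, scale=12.0, size=(31, 365))  # 31 "years"

# Annual Maximum series -> Generalized Extreme Value distribution.
am = daily_rain.max(axis=1)
c_gev, loc_gev, scale_gev = genextreme.fit(am)

# Partial Duration series (peaks over threshold) -> Generalized Pareto.
threshold = np.quantile(daily_rain, 0.99)
excesses = daily_rain[daily_rain > threshold] - threshold
c_gpd, loc_gpd, scale_gpd = genpareto.fit(excesses, floc=0.0)

# Example return value: 50-year level from the GEV fit (one maximum per year).
print("50-yr return level:", genextreme.ppf(1 - 1/50, c_gev, loc_gev, scale_gev))
```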

  6. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  7. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  8. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test p log p maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  9. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system utilizing the principle of Maximum Power Point Tracking can be more cost effective, have higher reliability and improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximizing the output current in a battery charging regulator, using an optimized, inexpensive microprocessor-based hill-climbing algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for Remote Area Power Supply systems of relatively small rating. The advantages at larger temperature variations and larger power ratings are much greater. Other advantages include optimal sizing and system monitoring and control
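
    The hill-climbing idea reduces to a few lines: perturb the operating point, keep the direction while power rises, reverse it when power falls. The PV curve below is a toy stand-in, not the converter or regulator described in the paper.

```python
# Minimal perturb-and-observe (hill-climbing) MPPT sketch.

def pv_power(v: float) -> float:
    """Toy PV curve: current falls off steeply near open-circuit voltage."""
    i = 5.0 * (1.0 - (v / 21.0) ** 12)   # hypothetical I-V shape, Voc ~ 21 V
    return max(v * i, 0.0)

def track_mpp(v: float = 12.0, step: float = 0.1, iters: int = 200) -> float:
    p_prev = pv_power(v)
    direction = +1.0
    for _ in range(iters):
        v += direction * step            # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                   # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v

v_mpp = track_mpp()                      # oscillates around ~17 V here
print(f"operating point ~ {v_mpp:.2f} V, {pv_power(v_mpp):.1f} W")
```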

  10. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  11. Compositional models for credal sets

    Czech Academy of Sciences Publication Activity Database

    Vejnarová, Jiřina

    2017-01-01

    Roč. 90, č. 1 (2017), s. 359-373 ISSN 0888-613X R&D Projects: GA ČR(CZ) GA16-12010S Institutional support: RVO:67985556 Keywords : Imprecise probabilities * Credal sets * Multidimensional models * Conditional independence Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 2.845, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/vejnarova-0483288.pdf

  12. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Bayesian credible and highest posterior density (HPD) credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare the various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purposes, a randomly censored real data set is discussed.
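
    For the maximum likelihood part, the MLE under random censoring has a closed form when the geometric distribution is parametrized on support 1, 2, ...; a minimal sketch on simulated data (the censoring mechanism below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate randomly censored geometric data: X ~ Geometric(p) is observed
# only if it occurs before an independent censoring time T.
p_true = 0.3
x = rng.geometric(p_true, size=500)
t = rng.geometric(0.1, size=500)          # hypothetical censoring mechanism
k = np.minimum(x, t)                      # observed value
delta = (x <= t).astype(int)              # 1 = uncensored, 0 = censored

# Closed-form MLE: log L = m log p + S log(1-p), with m uncensored events
# and S = sum of (k_i - 1) over uncensored plus k_j over censored cases,
# since an uncensored k contributes p(1-p)^(k-1) and a censored k
# contributes the survival probability (1-p)^k.
m = delta.sum()
S = (k - delta).sum()                     # k-1 when delta=1, k when delta=0
p_hat = m / (m + S)
print(f"true p = {p_true}, MLE p = {p_hat:.3f}")
```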

  13. The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission

    Science.gov (United States)

    Woodgate, B. E.; Brandt, J. C.; Kalet, M. W.; Kenny, P. J.; Tandberg-Hanssen, E. A.; Bruner, E. C.; Beckers, J. M.; Henze, W.; Knox, E. D.; Hyder, C. L.

    1980-01-01

    The Ultraviolet Spectrometer and Polarimeter (UVSP) on the Solar Maximum Mission spacecraft is described, including the experiment objectives, system design, performance, and modes of operation. The instrument operates in the wavelength range 1150-3600 Å with better than 2 arcsec spatial resolution, a raster range of 256 x 256 sq arcsec, and 20 mÅ spectral resolution in second order. Observations can be made with specific sets of four lines simultaneously, or with both sides of two lines simultaneously for velocity and polarization. A rotatable retarder can be inserted into the spectrometer beam for measurement of Zeeman splitting and linear polarization in the transition region and chromosphere.

  14. The ultraviolet spectrometer and polarimeter on the solar maximum mission

    International Nuclear Information System (INIS)

    Woodgate, B.E.; Brandt, J.C.; Kalet, M.W.; Kenny, P.J.; Beckers, J.M.; Henze, W.; Hyder, C.L.; Knox, E.D.

    1980-01-01

    The Ultraviolet Spectrometer and Polarimeter (UVSP) on the Solar Maximum Mission spacecraft is described, including the experiment objectives, system design, performance, and modes of operation. The instrument operates in the wavelength range 1150-3600 Å with better than 2 arc sec spatial resolution, a raster range of 256 x 256 arcsec², and 20 mÅ spectral resolution in second order. Observations can be made with specific sets of 4 lines simultaneously, or with both sides of 2 lines simultaneously for velocity and polarization. A rotatable retarder can be inserted into the spectrometer beam for measurement of Zeeman splitting and linear polarization in the transition region and chromosphere. (orig.)

  15. Calibrating EASY-Care independence scale to improve accuracy

    Science.gov (United States)

    Jotheeswaran, A. T.; Dias, Amit; Philp, Ian; Patel, Vikram; Prince, Martin

    2016-01-01

    Background: there is currently limited support for the reliability and validity of the EASY-Care independence scale, with little work carried out in low- or middle-income countries. Objective: we assessed the internal construct validity and the hierarchical and classical scaling properties of the scale among frail, dependent older people in the community. Methods: three primary care physicians administered the EASY-Care comprehensive geriatric assessment to 150 frail and/or dependent older people in the primary care setting. A Mokken model was applied to investigate the hierarchical scaling properties of the EASY-Care independence scale, and the internal consistency (Cronbach's alpha) of the scale was also examined. Results: we found that the EASY-Care independence scale is highly internally consistent and a strong hierarchical scale, providing strong evidence for unidimensionality. However, two items in the scale (unable to use the telephone and unable to manage finances) had much lower item Loevinger H coefficients than the others, and their exclusion improved the overall internal consistency of the scale. Conclusions: the strong performance of the EASY-Care independence scale among community-dwelling frail older people is encouraging. This study confirms that the EASY-Care independence scale is highly internally consistent and a strong hierarchical scale. PMID:27496925
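
    A minimal sketch of the internal-consistency check (Cronbach's alpha) on synthetic item scores; the number of items, subjects and noise levels are placeholders, not the EASY-Care data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_subjects, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(4)
ability = rng.normal(size=150)                             # latent trait
scores = ability[:, None] + rng.normal(0, 0.8, (150, 10))  # 10 items
scores[:, -2:] = rng.normal(0, 1.5, (150, 2))              # two weak items

# Dropping the two weakly related items raises alpha, mirroring the
# effect reported for the two low-Loevinger-H items.
print(f"alpha (all items)    = {cronbach_alpha(scores):.2f}")
print(f"alpha (2 items dropped) = {cronbach_alpha(scores[:, :-2]):.2f}")
```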

  16. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    Science.gov (United States)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, we must have a certain understanding of the lithological composition of the subsurface. Because of practical constraints, only a limited amount of data can be acquired, so many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids of a study area. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model. The limited hard data from cores and the soft data generated from geological dating data and virtual wells are used to estimate the three-dimensional lithological classification of the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  17. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    International Nuclear Information System (INIS)

    Kaya, Savaş; Kaya, Cemal; Islam, Nazmul

    2016-01-01

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. The lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For four simple reactions, the changes of hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all the chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage of ionic character and the internuclear distances of ionic compounds. Comparative studies with experimental data sets reveal that the proposed methods of computing the percentage of ionic character and the internuclear distances of ionic compounds are valid.

  18. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)

    2016-03-15

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. The lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For four simple reactions, the changes of hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all the chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage of ionic character and the internuclear distances of ionic compounds. Comparative studies with experimental data sets reveal that the proposed methods of computing the percentage of ionic character and the internuclear distances of ionic compounds are valid.

  19. LIBOR troubles: Anomalous movements detection based on maximum entropy

    Science.gov (United States)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations of banks' behavior. These procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  20. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
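
    The core computation, minimizing a negative log-likelihood over the unknown parameters of a nonlinear model, can be sketched as follows. This is a toy exponential-decay system with Gaussian noise of known variance, not the general dynamic-system machinery of MXLKID.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Toy nonlinear system: x(t) = a * exp(-b t), measured with noise.
t = np.linspace(0.0, 5.0, 50)
a_true, b_true, sigma = 2.0, 0.7, 0.05
y = a_true * np.exp(-b_true * t) + rng.normal(0.0, sigma, t.size)

def neg_log_likelihood(theta):
    a, b = theta
    resid = y - a * np.exp(-b * t)
    # Gaussian noise with known sigma: NLL is proportional to the
    # sum of squared residuals.
    return 0.5 * np.sum(resid**2) / sigma**2

result = minimize(neg_log_likelihood, x0=[1.0, 1.0], method="Nelder-Mead")
print("identified parameters:", result.x)   # close to (2.0, 0.7)
```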

  1. Dynamical pruning of static localized basis sets in time-dependent quantum dynamics

    NARCIS (Netherlands)

    McCormack, D.A.

    2006-01-01

    We investigate the viability of dynamical pruning of localized basis sets in time-dependent quantum wave packet methods. Basis functions that have a very small population at any given time are removed from the active set. The basis functions themselves are time independent, but the set of active

  2. CytoMCS: A Multiple Maximum Common Subgraph Detection Tool for Cytoscape

    DEFF Research Database (Denmark)

    Larsen, Simon; Baumbach, Jan

    2017-01-01

    such analyses we have developed CytoMCS, a Cytoscape app for computing inexact solutions to the maximum common edge subgraph problem for two or more graphs. Our algorithm uses an iterative local search heuristic for computing conserved subgraphs, optimizing a squared edge conservation score that is able to detect not only fully conserved edges but also partially conserved edges. It can be applied to any set of directed or undirected, simple graphs loaded as networks into Cytoscape, e.g. protein-protein interaction networks or gene regulatory networks. CytoMCS is available as a Cytoscape app at http://apps.cytoscape.org/apps/cytomcs.

  3. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    Science.gov (United States)

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
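
    A minimal sketch of the stability-ranking idea: run ICA several times with different seeds, match components across runs by absolute correlation, and watch where the average stability drops. The data and the candidate dimensions below are synthetic placeholders, not the protocol of the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
X = rng.lognormal(size=(200, 1000))   # toy "expression" matrix, 200 samples

def stability(X, n_components, runs=5):
    """Average best-match correlation of components across repeated runs."""
    comps = []
    for seed in range(runs):
        ica = FastICA(n_components=n_components, random_state=seed,
                      max_iter=1000)
        ica.fit(X)
        comps.append(ica.components_)   # (n_components, n_genes)
    ref, scores = comps[0], []
    for other in comps[1:]:
        c = np.abs(np.corrcoef(np.vstack([ref, other]))[:len(ref), len(ref):])
        scores.append(c.max(axis=1).mean())   # best match per component
    return float(np.mean(scores))

for k in (2, 5, 10, 20):
    print(k, round(stability(X, k), 3))   # look for the drop ("MSTD")
```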

  4. Energy-Independent Architectural Models for Residential Complex Plans through Solar Energy in Daegu Metropolitan City, South Korea

    Directory of Open Access Journals (Sweden)

    Sung-Yul Kim

    2018-02-01

    This study suggests energy-independent architectural models for residential complexes based on the production of solar energy. Daegu Metropolitan City, South Korea, was selected as the target area for the residential complex. An optimal location in the area was selected to maximize the production of solar-based renewable energy, and several architectural design models were developed. After analyzing the energy-use patterns of each design model, economic analyses were conducted considering the profits generated from renewable-energy use, and the optimum residential building model was identified. For this site, optimal solar power generation efficiency was obtained when solar panels were installed at a 25° angle. Thus, the sloped roof angles were set to 25°, and the average height of the internal space of the highest floor was set to 1.8 m. Based on this model, analyses were performed regarding energy self-sufficiency improvement and economics. It was verified that sizing the solar generation capacity from a zero-energy perspective, i.e. according to the consumers' power consumption, is more effective than installing the maximum solar generation capacity the building structure allows. Moreover, selecting a subsidizable solar power generation capacity according to the residential solar power facility connection can maximize operational benefits.

  5. Some aspects of transformation of the nonlinear plasma equations to the space-independent frame

    International Nuclear Information System (INIS)

    Paul, S.N.; Chakraborty, B.

    1982-01-01

    Relativistically correct transformations of the nonlinear plasma equations to a space-independent frame are derived. This transformation is useful in many ways because in place of partial differential equations one obtains a set of ordinary differential equations in a single independent variable. The equations of Akhiezer and Polovin (1956) for nonlinear plasma oscillations have been generalized, and the results of Arons and Max (1974) and others for wave-number shift and precessional rotation of an electromagnetic wave are recovered in a space-independent frame. (author)

  6. The Usher lifestyle survey : maintaining independence: a multi-centre study

    NARCIS (Netherlands)

    Damen, Godelieve W J A; Krabbe, Paul F M; Kilsby, M; Mylanus, Emmanuel A M

    2005-01-01

    Patients with Usher syndrome face a special set of challenges in order to maintain their independence when their sight and hearing worsen. Three different types of Usher (I, II and III) are distinguished by differences in onset, progression and severity of hearing loss, and by the presence or

  7. The Usher lifestyle survey: maintaining independence: a multi-centre study.

    NARCIS (Netherlands)

    Damen, G.W.J.A.; Krabbe, P.F.M.; Kilsby, M.; Mylanus, E.A.M.

    2005-01-01

    Patients with Usher syndrome face a special set of challenges in order to maintain their independence when their sight and hearing worsen. Three different types of Usher (I, II and III) are distinguished by differences in onset, progression and severity of hearing loss, and by the presence or

  8. Maximum production rate optimization for sulphuric acid decomposition process in tubular plug-flow reactor

    International Nuclear Information System (INIS)

    Wang, Chao; Chen, Lingen; Xia, Shaojun; Sun, Fengrui

    2016-01-01

    A sulphuric acid decomposition process in a tubular plug-flow reactor with fixed inlet flow rate and completely controllable exterior wall temperature profile and reactant pressure profile is studied in this paper using finite-time thermodynamics. The maximum production rate of the target product SO2 and the corresponding optimal exterior wall temperature and reactant pressure profiles are obtained by nonlinear programming. The optimal reactor with maximum production rate is then compared with a reference reactor with a linear exterior wall temperature profile and with the optimal reactor with minimum entropy generation rate. The results show that the SO2 production rate of the optimal reactor increases by more than 7%. Optimizing the temperature profile has little influence on the production rate, while optimizing the reactant pressure profile can significantly increase it. The results may provide guidelines for the design of real tubular reactors. - Highlights: • Sulphuric acid decomposition in a tubular plug-flow reactor is studied. • Fixed inlet flow rate and controllable temperature and pressure profiles are set. • The maximum production rate of the target product SO2 is obtained. • The corresponding optimal temperature and pressure profiles are derived. • The SO2 production rate of the optimal reactor increases by 7%.

  9. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  10. Application of independent component analysis to H-1 MR spectroscopic imaging exams of brain tumours

    NARCIS (Netherlands)

    Szabo de Edelenyi, F.; Simonetti, A.W.; Postma, G.; Huo, R.; Buydens, L.M.C.

    2005-01-01

    The low spatial resolution of clinical H-1 MRSI leads to partial volume effects. To overcome this problem, we applied independent component analysis (ICA) to a set of H-1 MRSI exams of brain tumours. With this method, tissue types that yield statistically independent spectra can be separated. Up to

  11. Maximum power analysis of photovoltaic module in Ramadi city

    Energy Technology Data Exchange (ETDEWEB)

    Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)

    2013-07-01

    Performance of a photovoltaic (PV) module is greatly dependent on solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power obtainable in Ramadi city (100 km west of Baghdad) is analyzed in practice. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The data were measured at the earth's surface on the campus of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which was set to save an average reading every two minutes based on one-second samples. Maximum daily readings and monthly average readings of solar irradiance were analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.

  12. Fine particles from Independence Day fireworks events: chemical characterization and source apportionment

    Science.gov (United States)

    Zhang, J.; Lance, S.; Freedman, J. M.; Yele, S.; Crandall, B.; Wei, X.; Schwab, J. J.

    2017-12-01

    To study the impact of fireworks (FW) events on air quality, aerosol particles from FW displays were measured using a High-Resolution Time-of-Flight Aerosol Mass Spectrometer (HR-ToF-AMS) and collocated instruments during the Independence Day holiday of 2017 in Albany, NY. Three FW events were identified through potassium ion (K+) signals in the mass spectra. The largest FW signal measured at two different locations came from the Independence Day celebration in Albany, with maximum aerosol concentrations of about 55 µg/m³ at the downtown site and 35 µg/m³ at the uptown site. The aerosol concentration peaked at the uptown site about 2 hours later than at the downtown site. FW events resulted in significant increases in both organic and inorganic (K+, sulfate, chloride) compounds. Among the organics, Positive Matrix Factorization (PMF) identified one distinct, highly oxidized FW organic aerosol factor (FW-OA). The intense emission of FW particles from the Independence Day celebration contributed 76% of total PM1 at the uptown site. The aerosol and wind LiDAR measurements showed two distinct pollution sources: one from the Independence Day FW event in Albany, and another aerosol source transported from other areas, potentially associated with other towns' FW events.

  13. Almost Free Modules Set-Theoretic Methods

    CERN Document Server

    Eklof, PC

    1990-01-01

    This is an extended treatment of the set-theoretic techniques which have transformed the study of abelian group and module theory over the last 15 years. Part of the book is new work which does not appear elsewhere in any form. In addition, a large body of material which has appeared previously (in scattered and sometimes inaccessible journal articles) has been extensively reworked and in many cases given new and improved proofs. The set theory required is carefully developed with algebraists in mind, and the independence results are derived from explicitly stated axioms. The book contains exe

  14. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  15. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
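
    The bottom-up pass of MP ancestral state inference follows Fitch's set rule: take the intersection of the children's state sets when it is non-empty, otherwise their union. A minimal sketch on a hypothetical five-leaf bifurcating tree encoded as nested tuples:

```python
# Minimal Fitch (Maximum Parsimony) ancestral-state sketch for one
# alignment column on a fixed bifurcating tree.

def fitch(node):
    """Bottom-up pass: return the Fitch state set of `node`."""
    if isinstance(node, str):          # leaf: observed character state
        return {node}
    left, right = fitch(node[0]), fitch(node[1])
    common = left & right
    return common if common else left | right

# Hypothetical tree of five species with their states at one site.
tree = ((("a", "a"), "a"), ("a", "c"))
print("root state set:", fitch(tree))  # {'a'}: MP unambiguously infers 'a'
```

    With four leaves in state 'a' and one in 'c', the intersection rule propagates {'a'} to the root, echoing the question studied here of how many species must carry a state for MP to return it unambiguously.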

  16. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant volume heat addition and constant pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology. It should be especially emphasized that all the results and conclusions are based on classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density with constant effectiveness of the hot and cold side heat exchangers or constant inlet temperature ratio of the heat reservoirs will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the view points of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions

  17. Optimal Control of Hypersonic Planning Maneuvers Based on Pontryagin’s Maximum Principle

    Directory of Open Access Journals (Sweden)

    A. Yu. Melnikov

    2015-01-01

    The objective of this work is the synthesis of a simple analytical formula for the optimal roll angle of hypersonic gliding vehicles under conditions of quasi-horizontal motion, allowing its practical implementation in onboard control algorithms. The introduction justifies the relevance of the problem, formulates the basic control tasks, and reviews the history of research and achievements in the field. The author notes a common shortcoming of other authors' methods, namely the difficulty of practical implementation in onboard control algorithms. Similar hypersonic maneuvering tasks are systematized according to the type of maneuver, the control parameters and the limitations. In the statement of the problem, the glider, launched horizontally with suborbital speed, glides passively in a static atmosphere on a spherical surface of constant radius in a central gravitational field. The work specifies a system of equations of motion in an inertial spherical coordinate system and sets the limits on the roll angle and the optimization criteria at the end of the flight: maximum speed or azimuth and minimum distances to specified geocentric points. The solution proceeds in three steps. (1) The system of equations of motion is transformed by replacing the time argument with another independent argument, the normal equilibrium overload. The Hamiltonian and the equations for the adjoint parameters are obtained using Pontryagin's maximum principle, and the number of equations of motion and adjoint variables is reduced. (2) The adjoint parameters are expressed through the current motion parameters by explicit formulas, which are verified by differentiation and substitution into the equations of motion. (3) The formula for optimal roll-position control is obtained from the maximum condition; after substitution of the adjoint parameters, insertion of the constants and trigonometric transformations, the optimal roll angle is obtained as a function of the current parameters of motion. The roll angle is expressed as the ratio

  18. Influence maximization in social networks under an independent cascade-based model

    Science.gov (United States)

    Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan

    2016-02-01

    The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the adoption of information or products. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users hold positive or negative initial opinions; as more users become involved in the discussions, users balance their own opinions against those of their neighbors. The number of users who do not change their positive opinions is used to determine positive influence, and the corresponding influential users with maximum positive influence are then obtained. Experiments were conducted on three real networks, namely Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two baseline methods. The proposed model resulted in larger positive influence, indicating better performance compared with the baseline methods.
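
    A minimal sketch of the underlying independent cascade model with greedy seed selection; it ignores the opinion dynamics specific to IMIC-OC, and the graph, propagation probability and seed-set size below are hypothetical.

```python
import random

random.seed(7)

def simulate_ic(graph, seeds, p=0.1):
    """One cascade: each newly active node gets one chance to activate
    each inactive neighbour with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def spread(graph, seeds, runs=500):
    """Monte Carlo estimate of the expected number of activated nodes."""
    return sum(len(simulate_ic(graph, seeds)) for _ in range(runs)) / runs

# Greedy seed selection (k = 2) on a toy directed graph (adjacency lists).
graph = {0: [1, 2], 1: [2, 3], 2: [3, 4], 3: [5], 4: [5], 5: [0]}
chosen = []
for _ in range(2):
    best = max((n for n in graph if n not in chosen),
               key=lambda n: spread(graph, chosen + [n]))
    chosen.append(best)
print("seed set:", chosen, "expected spread:", spread(graph, chosen))
```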

  19. WMAXC: a weighted maximum clique method for identifying condition-specific sub-network.

    Directory of Open Access Journals (Sweden)

    Bayarbaatar Amgalan

    Sub-networks can expose complex patterns in an entire bio-molecular network by extracting interactions that depend on temporal or condition-specific contexts. When genes interact with each other during cellular processes, they may form differential co-expression patterns with other genes across different cell states. The identification of condition-specific sub-networks is therefore of great importance in investigating how a living cell adapts to environmental changes. In this work, we propose the weighted maximum clique (WMAXC) method to identify condition-specific sub-networks. WMAXC first proposes scoring functions that jointly measure condition-specific changes to both individual genes and gene-gene co-expressions. It then employs a weaker formulation of the general maximum clique problem and relates the maximum-scored clique of a weighted graph to the optimization of a quadratic objective function under sparsity constraints. We combine a continuous genetic algorithm with a projection procedure to obtain a single optimal sub-network that maximizes the objective function (scoring function) over the standard simplex (sparsity constraints). We applied the WMAXC method to both simulated data and real data sets of ovarian and prostate cancer. Compared with previous methods, WMAXC selected a large fraction of cancer-related genes, which were enriched in cancer-related pathways. The results demonstrate that our method efficiently captures a subset of genes relevant under the investigated condition.
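
    As a point of contrast with WMAXC's continuous relaxation, the discrete problem it relaxes can be approached with a simple greedy baseline for a maximum-weight clique; the adjacency structure, gene names and weights below are hypothetical.

```python
# Greedy heuristic for a maximum-weight clique: repeatedly add the
# heaviest vertex that stays adjacent to everything chosen so far.

def greedy_weighted_clique(adj, weight):
    clique = []
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=weight.get)        # heaviest feasible vertex
        clique.append(v)
        candidates = {u for u in candidates if u in adj[v] and u != v}
    return clique

adj = {"g1": {"g2", "g3"}, "g2": {"g1", "g3"},
       "g3": {"g1", "g2", "g4"}, "g4": {"g3"}}
weight = {"g1": 2.0, "g2": 1.5, "g3": 3.0, "g4": 0.5}
print(greedy_weighted_clique(adj, weight))         # ['g3', 'g1', 'g2']
```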

  20. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML) methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from biological data alone, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than the more commonly used weighted-averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.

  1. Effect of PVRC damping with independent support motion response spectrum analysis of piping systems

    International Nuclear Information System (INIS)

    Wang, Y.K.; Bezler, P.; Shteyngart, S.

    1986-01-01

    The Technical Committee for Piping Systems of the Pressure Vessel Research Committee (PVRC) has recommended new damping values to be used in the seismic analyses of piping systems in nuclear power plants. To evaluate the effects of coupling these recommendations with independent support motion (ISM) analysis methods, two sets of seismic analyses were carried out for several piping systems: one set based on uniform damping as specified in Regulatory Guide 1.61, the other based on the PVRC recommendations. In each set, the analyses were performed using independent support motion time history and response spectrum methods as well as the envelope spectrum method. In the independent response spectrum analyses, 14 response estimates were in fact obtained by considering different combination procedures between the support group contributions and all sequences of combinations between support groups, modes and directions. For each analysis set, the response spectrum results were compared with time history estimates of those results. Comparison tables were then prepared depicting the percentage by which the response spectrum estimates exceeded the time history estimates. Comparing the result tables between the two analysis sets shows the impact of PVRC damping. Preliminary results show that the degree of exceedance of the response spectrum estimates based on PVRC damping is less than that based on uniform damping for the same piping problem. Expressed differently, the results obtained when ISM methods are coupled with PVRC damping are not as conservative as those obtained using uniform damping

  2. Nonlinear analysis of vehicle control actuations based on controlled invariant sets

    Directory of Open Access Journals (Sweden)

    Németh Balázs

    2016-03-01

    In the paper, an analysis method is applied to the lateral stabilization problem of vehicle systems. The aim is to find the largest state-space region in which the lateral stability of the vehicle can be guaranteed by a peak-bounded control input. In the analysis, the nonlinear polynomial sum-of-squares programming method is applied, and a practical computation technique is developed to calculate the maximum controlled invariant set of the system. The method calculates the maximum controlled invariant sets of the steering and braking control systems at various velocities and road conditions. Illustrative examples show that, depending on the environment, different vehicle dynamic regions can be reached and stabilized by these controllers. The results can serve as a theoretical basis for their interventions into the vehicle control system.

  3. Parallelization of maximum likelihood fits with OpenMP and CUDA

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; Pantaleo, F

    2011-01-01

    Data analyses based on maximum likelihood fits are commonly used in the high energy physics community for fitting statistical models to data samples. This technique requires the numerical minimization of the negative log-likelihood function. MINUIT is the most common package used for this purpose in the high energy physics community. The main algorithm in this package, MIGRAD, searches for the minimum by using the gradient information. The procedure requires several evaluations of the function, depending on the number of free parameters and their initial values. The whole procedure can be very CPU-time consuming for complex functions with several free parameters, many independent variables and large data samples. It therefore becomes particularly important to speed up the evaluation of the negative log-likelihood function. In this paper we present an algorithm and its implementation which benefits from data vectorization and parallelization (based on OpenMP) and which was also ported to Graphics Processi...

  4. Domain Independent Vocabulary Generation and Its Use in Category-based Small Footprint Language Model

    Directory of Open Access Journals (Sweden)

    KIM, K.-H.

    2011-02-01

    The work in this paper pertains to domain-independent vocabulary generation and its use in a category-based small-footprint Language Model (LM). Two major constraints on conventional LMs in the embedded environment are the memory capacity limitation and data sparsity for domain-specific applications. This data sparsity adversely affects vocabulary coverage and LM performance. To overcome these constraints, we define a set of domain-independent categories using a Part-Of-Speech (POS) tagged corpus. We also generate a domain-independent vocabulary based on this set, using the corpus and a knowledge base. Then we propose a mathematical framework for a category-based LM using this set, in which one word can be assigned multiple categories. In order to reduce its memory requirements, we propose a tree-based data structure. In addition, we determine the history length of the category n-gram and the independence assumption applied to category history generation. The proposed vocabulary generation method yields at least a 13.68% relative improvement in coverage for an SMS text corpus, where data are sparse due to the difficulties of data collection. The proposed category-based LM requires only 215 KB, which is 55% and 13% of the size of the conventional category-based LM and the word-based LM, respectively. It also improves performance, achieving 54.9% and 60.6% perplexity reductions compared to the conventional category-based LM and the word-based LM in terms of normalized perplexity.

  5. Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows

    DEFF Research Database (Denmark)

    Karacaören, Burak; Janss, Luc; Kadarmideen, Haja

    Data on maximum milking speed were obtained from dairy cows stationed at the ETH Zurich research farm. The main aims of this paper are (a) to evaluate whether the Wood curve is suitable to model the mean lactation curve and (b) to predict longitudinal breeding values by random regression and random walk models of maximum milking speed. The Wood curve did not provide a good fit to the data set. Quadratic random regressions gave better predictions compared with the random walk model; however, the random walk model does not need to be evaluated for different orders of regression coefficients. In addition, with Kalman filter applications the random walk model could give online prediction of breeding values; hence, without waiting for whole-lactation records, genetic evaluation could be made as soon as daily or monthly data become available.

  6. Hydraulic limits on maximum plant transpiration and the emergence of the safety-efficiency trade-off.

    Science.gov (United States)

    Manzoni, Stefano; Vico, Giulia; Katul, Gabriel; Palmroth, Sari; Jackson, Robert B; Porporato, Amilcare

    2013-04-01

    Soil and plant hydraulics constrain ecosystem productivity by setting physical limits to water transport and hence carbon uptake by leaves. While more negative xylem water potentials provide a larger driving force for water transport, they also cause cavitation that limits hydraulic conductivity. An optimum balance between driving force and cavitation occurs at intermediate water potentials, thus defining the maximum transpiration rate the xylem can sustain (denoted as E(max)). The presence of this maximum raises the question of whether plants regulate transpiration through stomata to function near E(max). To address this question, we calculated E(max) across plant functional types and climates using a hydraulic model and a global database of plant hydraulic traits. The predicted E(max) compared well with measured peak transpiration across plant sizes and growth conditions (R = 0.86, P < 0.001), pointing to a safety-efficiency trade-off in plant xylem. Stomatal conductance allows maximum transpiration rates despite partial cavitation in the xylem, thereby suggesting coordination between stomatal regulation and xylem hydraulic characteristics.

  7. Realworld maximum power point tracking simulation of PV system based on Fuzzy Logic control

    Science.gov (United States)

    Othman, Ahmed M.; El-arini, Mahdi M. M.; Ghitas, Ahmed; Fathy, Ahmed

    2012-12-01

    In recent years, solar energy has become one of the most important alternative sources of electric energy, so it is important to improve the efficiency and reliability of photovoltaic (PV) systems. Maximum power point tracking (MPPT) plays an important role in photovoltaic power systems because it maximizes the power output from a PV system for a given set of conditions and therefore maximizes array efficiency. This paper presents a maximum power point tracker using Fuzzy Logic theory for a PV system. The work focuses on the well-known Perturb and Observe (P&O) algorithm, which is compared to a designed fuzzy logic controller (FLC). Simulations of the MPPT controller and a DC/DC Ćuk converter feeding a load are carried out. The results show that the proposed Fuzzy Logic MPPT for the PV system is valid.

  8. Search for the maximum efficiency of a ribbed-surfaces device, providing a tight seal

    International Nuclear Information System (INIS)

    Boutin, Jeanne.

    1977-04-01

    The purpose of this experiment was to determine the geometrical characteristics of ribbed surfaces used to equip devices in translation or slow rotation that must form an acceptable seal between slightly viscous fluids. It systematically studies the pressure-loss coefficient lambda as a function of the different parameters setting the form of the ribs and their relative position on the opposite sides. It shows that passages with two ribbed surfaces give markedly better results than those with only one, the maximum value of lambda, equal to 0.5, being obtained with the ratios pitch/clearance = 5 and groove depth/clearance = 1.2, and with the teeth face to face on the two opposite ribbed surfaces. With certain shapes, an alternate position of the ribs can lead to a maximum of lambda somewhat lower than 0.5

  9. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    Science.gov (United States)

    Tan, Elcin

    A new physically based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecasting (WRF-ARW) model. A persistent moisture flux convergence pattern, called the Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that the Pineapple Express causes extreme precipitation over the basin of interest. The average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for the 42 events. The performance of the WRF model for precipitation is verified by means of calibration and independent validation. The calibration procedure is performed only for the first-ranked flood event of 1997, whereas the WRF model is validated for the 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options for precipitation are determined using 28 combinations of microphysics, atmospheric boundary layer, and cumulus parameterization schemes. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and the YSU boundary layer (TGY), based on the 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. perturbation of atmospheric conditions; 2. shifts in atmospheric conditions; 3. replacement of atmospheric conditions among historical events; and 4. creation of a thermodynamically possible worst-case scenario. Moreover, the effect of climate change on precipitation is discussed by emphasizing temperature increase in order to determine the

  10. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review

    Science.gov (United States)

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme; this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale-setting approaches suggested in the literature. As a step forward, in the present review we present an in-depth discussion of two well-established scale-setting methods based on RGI. One is the 'principle of maximum conformality' (PMC), in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the 'principle of minimum sensitivity' (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables, R_{e+e-} and Γ(H → bb̄), up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically motivated scale, and each prediction shows small scale dependence. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on

  11. Source-Based Tasks in Writing Independent and Integrated Essays

    Directory of Open Access Journals (Sweden)

    Javad Gholami

    2017-07-01

    Integrated writing tasks have gained considerable attention in ESL and EFL writing assessment and are frequently needed and used in academic settings and daily life. However, they are very rarely practiced and promoted in writing classes. This paper explored the effects of source-based writing practice on EFL learners' composing abilities and investigated the probable differences between those tasks and independent writing tasks in improving Iranian EFL learners' essay writing abilities. To this end, a quasi-experimental design with a pretest-posttest layout was implemented to gauge EFL learners' writing improvements. Twenty female learners taking a TOEFL iBT preparation course were randomly divided into an only-writing group, which received just independent writing instruction and essay practice, and a hybrid-writing-approach group, which received instruction and practice in independent writing plus source-based essay writing for ten sessions. Based on the findings, the participants with hybrid writing practice outperformed their counterparts in integrated essay tests. Their superior performance was not observed in the case of traditional independent writing tasks. The present study calls for incorporating more source-based writing tasks in writing courses.

  12. Measurement-Device-Independent Approach to Entanglement Measures

    Science.gov (United States)

    Shahandeh, Farid; Hall, Michael J. W.; Ralph, Timothy C.

    2017-04-01

    Within the context of semiquantum nonlocal games, the trust can be removed from the measurement devices in an entanglement-detection procedure. Here, we show that a similar approach can be taken to quantify the amount of entanglement. To be specific, first, we show that in this context, a small subset of semiquantum nonlocal games is necessary and sufficient for entanglement detection in the local operations and classical communication paradigm. Second, we prove that the maximum payoff for these games is a universal measure of entanglement which is convex and continuous. Third, we show that for the quantification of negative-partial-transpose entanglement, this subset can be further reduced down to a single arbitrary element. Importantly, our measure is measurement device independent by construction and operationally accessible. Finally, our approach straightforwardly extends to quantify the entanglement within any partitioning of multipartite quantum states.

  13. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, the transport of the beam through the matching section and its injection into Linac-1 is discussed.

  14. Dust fluxes and iron fertilization in Holocene and Last Glacial Maximum climates

    Science.gov (United States)

    Lambert, Fabrice; Tagliabue, Alessandro; Shaffer, Gary; Lamy, Frank; Winckler, Gisela; Farias, Laura; Gallardo, Laura; De Pol-Holz, Ricardo

    2015-07-01

    Mineral dust aerosols play a major role in present and past climates. To date, we rely on climate models for estimates of dust fluxes to calculate the impact of airborne micronutrients on biogeochemical cycles. Here we provide a new global dust flux data set for Holocene and Last Glacial Maximum (LGM) conditions based on observational data. A comparison with dust flux simulations highlights regional differences between observations and models. By forcing a biogeochemical model with our new data set and using this model's results to guide a millennial-scale Earth System Model simulation, we calculate the impact of enhanced glacial oceanic iron deposition on the LGM-Holocene carbon cycle. On centennial timescales, the higher LGM dust deposition results in a weak reduction of atmospheric CO2 through a strengthened biological pump. This is followed by a further ~10 ppm reduction over millennial timescales due to greater carbon burial and carbonate compensation.

  15. Applications of the principle of maximum entropy: from physics to ecology.

    Science.gov (United States)

    Banavar, Jayanth R; Maritan, Amos; Volkov, Igor

    2010-02-17

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
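
    As a concrete illustration of constrained entropy maximization (not taken from the paper), the sketch below recovers the exponential-family solution for a discrete variable with a prescribed mean by solving for the Lagrange multiplier; the six states and the target mean are illustrative placeholders.

```python
# Minimal sketch: discrete maximum-entropy distribution subject to a mean
# constraint, solved through its exponential-family dual.
import numpy as np
from scipy.optimize import brentq

states = np.arange(1, 7)          # e.g. faces of a die (placeholder states)
target_mean = 4.5                 # observed constraint E[x] = 4.5

def mean_given_beta(beta):
    w = np.exp(beta * states)     # unnormalized Gibbs weights
    p = w / w.sum()
    return p @ states

# Solve for the Lagrange multiplier that reproduces the target mean.
beta = brentq(lambda b: mean_given_beta(b) - target_mean, -10.0, 10.0)
w = np.exp(beta * states)
print(w / w.sum())                # the maximum-entropy distribution
```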

  16. Applications of the principle of maximum entropy: from physics to ecology

    International Nuclear Information System (INIS)

    Banavar, Jayanth R; Volkov, Igor; Maritan, Amos

    2010-01-01

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori. (topical review)

  17. Central Bank independence

    Directory of Open Access Journals (Sweden)

    Vasile DEDU

    2012-08-01

    Full Text Available In this paper we present the key aspects of central bank independence. Most economists consider that the factor which positively influences the efficiency of monetary policy measures is a high degree of independence of the central bank. We determined that the National Bank of Romania (NBR) has a high degree of independence. NBR has both goal and instrument independence. We also consider that the increase in NBR’s independence played an important role in the significant disinflation process, as headline inflation recently dropped inside the targeted band of 3% ± 1 percentage point.

  18. Independent histogram pursuit for segmentation of skin lesions

    DEFF Research Database (Denmark)

    Gomez, D.D.; Butakoff, C.; Ersbøll, Bjarne Kjær

    2008-01-01

    In this paper, an unsupervised algorithm, called Independent Histogram Pursuit (IHP), for segmenting dermatological lesions is proposed. The algorithm estimates a set of linear combinations of image bands that enhance different structures embedded in the image. In particular, the first estima...... to deal with different types of dermatological lesions. The boundary detection precision using k-means segmentation was close to 97%. The proposed algorithm can be easily combined with the majority of classification algorithms....

  19. Spatially independent martingales, intersections, and applications

    CERN Document Server

    Shmerkin, Pablo

    2018-01-01

    The authors define a class of random measures, spatially independent martingales, which we view as a natural generalization of the canonical random discrete set, and which includes as special cases many variants of fractal percolation and Poissonian cut-outs. The authors pair the random measures with deterministic families of parametrized measures {η_t}_t, and show that under some natural checkable conditions, a.s. the mass of the intersections is Hölder continuous as a function of t. This continuity phenomenon turns out to underpin a large amount of geometric information about these measures, allowing us to unify and substantially generalize a large number of existing results on the geometry of random Cantor sets and measures, as well as obtaining many new ones. Among other things, for large classes of random fractals they establish (a) very strong versions of the Marstrand-Mattila projection and slicing results, as well as dimension conservation, (b) slicing results with respect to algebraic curves a...

  20. The Maximum Entropy Method for Optical Spectrum Analysis of Real-Time TDDFT

    International Nuclear Information System (INIS)

    Toogoshi, M; Kano, S S; Zempo, Y

    2015-01-01

    The maximum entropy method (MEM) is one of the key techniques for spectral analysis. Its major feature is that spectra in the low-frequency region can be described from short time-series data. We therefore applied MEM to analyse the spectrum obtained from the time-dependent dipole moment computed with real-time time-dependent density functional theory (TDDFT), which is intensively studied for computing optical properties. In MEM analysis, however, the maximum lag of the autocorrelation is restricted by the total number of time-series data points. As an improved MEM analysis, we proposed using a concatenated data set made from several repetitions of the raw data. We have applied this technique to the spectral analysis of the TDDFT dipole moment of ethylene and oligo-fluorene with n = 8. As a result, higher resolution can be obtained, close to that of a Fourier transform of a much longer time evolution, for the same total number of computed time steps. The efficiency and the characteristic features of this technique are presented in this paper. (paper)
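
    The following sketch illustrates the idea under stated assumptions: a maximum-entropy (Burg) spectrum is estimated from a short synthetic "dipole" signal and from the same signal concatenated several times, mimicking the repeated-data trick described above. The signal, model order, and repetition count are placeholders, not values from the paper.

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients and error power for the MEM spectrum."""
    ef = np.asarray(x, float).copy()   # forward prediction errors
    eb = ef.copy()                     # backward prediction errors
    a = np.zeros(0)
    E = np.dot(ef, ef) / len(ef)       # zeroth-order error power
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]
        k = -2.0 * np.dot(efp, ebp) / (np.dot(efp, efp) + np.dot(ebp, ebp))
        ef, eb = efp + k * ebp, ebp + k * efp
        a = np.concatenate([a + k * a[::-1], [k]])   # Levinson update
        E *= 1.0 - k * k
    return a, E

def mem_psd(a, E, dt, freqs):
    """MEM spectrum P(f) = E*dt / |1 + sum_k a_k exp(-2*pi*i*f*k*dt)|^2."""
    k = np.arange(1, len(a) + 1)
    phase = np.exp(-2j * np.pi * dt * np.outer(freqs, k))
    return E * dt / np.abs(1.0 + phase @ a) ** 2

dt = 0.05
t = np.arange(400) * dt
dipole = np.cos(2 * np.pi * 1.3 * t) * np.exp(-0.01 * t)  # toy dipole signal
freqs = np.linspace(0.0, 4.0, 500)

psd_raw = mem_psd(*burg_ar(dipole, 30), dt, freqs)
psd_cat = mem_psd(*burg_ar(np.tile(dipole, 4), 30), dt, freqs)  # repeated data
print(freqs[psd_raw.argmax()], freqs[psd_cat.argmax()])         # peaks near 1.3
```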

  1. A suitable model plant for control of the set fuel cell-DC/DC converter

    Energy Technology Data Exchange (ETDEWEB)

    Andujar, J.M.; Segura, F.; Vasallo, M.J. [Departamento de Ingenieria Electronica, Sistemas Informaticos y Automatica, E.P.S. La Rabida, Universidad de Huelva, Ctra. Huelva - Palos de la Frontera, S/N, 21819 La Rabida - Palos de la Frontera Huelva (Spain)

    2008-04-15

    In this work a state and transfer function model of the set made up of a proton exchange membrane (PEM) fuel cell and a DC/DC converter is developed. The set is modelled as a plant controlled by the converter duty cycle. In addition to allowing the plant operating point to be set at any point of its characteristic curve (two interesting points are the maximum efficiency and maximum power points), this approach also allows the connection of the fuel cell to other energy generation and storage devices, given that, as they all usually share a single DC bus, a thorough control of the interconnected devices is required. First, the state and transfer function models of the fuel cell and the converter are obtained. Then, both models are related in order to achieve the fuel cell+DC/DC converter set (plant) model. The results of the theoretical developments are validated by simulation on a real fuel cell model. (author)
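
    As a hedged sketch of what such a duty-cycle-controlled plant model can look like (not the paper's actual model), the snippet below builds the averaged small-signal state-space model of an ideal buck converter fed from a fuel cell approximated as a Thevenin source with a linearized polarization resistance; all component values are illustrative.

```python
import numpy as np
from scipy import signal

E_fc, r_fc = 40.0, 0.25          # fuel-cell open-circuit voltage (V) and
                                 # linearized polarization resistance (ohm)
L, C, R = 1e-3, 470e-6, 10.0     # inductor (H), capacitor (F), load (ohm)
D = 0.5                          # steady-state duty cycle (operating point)

# Steady-state inductor current from 0 = D*E_fc - D^2*r_fc*I_L - R*I_L.
I_L = D * E_fc / (R + D**2 * r_fc)

# Averaged model: L di/dt = d*E_fc - d^2*r_fc*i - v,  C dv/dt = i - v/R.
# Linearizing around (D, I_L) gives states x = [i~, v~], input u = d~.
A = np.array([[-(D**2) * r_fc / L, -1.0 / L],
              [1.0 / C,            -1.0 / (R * C)]])
B = np.array([[(E_fc - 2.0 * D * r_fc * I_L) / L],
              [0.0]])
Cmat = np.array([[0.0, 1.0]])    # observe the output (capacitor) voltage
Dmat = np.array([[0.0]])

num, den = signal.ss2tf(A, B, Cmat, Dmat)
print("v~(s)/d~(s):", np.squeeze(num), "/", den)
```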

  2. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  3. The effect of independent collimator misalignment on the dosimetry of abutted half-beam blocked fields for the treatment of head and neck cancer

    International Nuclear Information System (INIS)

    Rosenthal, D.I.; McDonough, J.; Kassaee, A.

    1998-01-01

    Background and purpose: Independent collimation conveniently allows for the junctioning of abutting fields with non-diverging beam edges. When this technique is used at the junction of multiple fields, e.g. lateral and low anterior fields in three-field head and neck set-ups, there should be a dosimetric match with no overdose or underdose at the matchline. We set out to evaluate the actual dosimetry at the central match plane. Materials and methods: Independent jaws were used to mimic two half-beam blocked fields abutting at the central axis. X-Ray verification film was exposed in a water-equivalent phantom and the dose at the matchline was evaluated with laser densitometry. Collimators were then programmed to force a gap or overlap of the radiation fields to evaluate the effect of jaw misalignment within the tolerance of the manufacturer's specification. Diode measurements of the field edges were also performed. Four beam energies from four different linear accelerators were evaluated. Results: Small systematic inhomogeneities were found along the matchline in all linear accelerators tested. The maximum dose on the central axis varied linearly with small programmed jaw misalignments. For a gap or overlap of 2 mm between the jaws, the matchline dose increased or decreased by 30-40%. The region of overdose or underdose around the matchline is 3-4 mm wide. The discrepancy between the width of jaw separation and the width of the region of altered dose is explained by a penumbra effect. Conclusion: We recommend that independent jaw alignment be evaluated routinely and provide a simple method to estimate dose inhomogeneity at the match plane. If there is a field gap or overlap resulting in a clinically significant change in dosimetry, jaw misalignment should be corrected. If it cannot be corrected, part of the benefit of asymmetric collimation is lost and other methods of field junctioning may have to be considered. We routinely use a small block over the spinal cord at
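
    A back-of-envelope reading of the reported linearity (not a method from the paper): a 2 mm jaw gap or overlap changed the matchline dose by 30-40%, so a simple estimate scales the midpoint of that range with the measured misalignment.

```python
# Illustrative assumption: 35% dose change per 2 mm of jaw misalignment,
# the midpoint of the 30-40% range reported above.
def matchline_dose_change(misalignment_mm, pct_per_2mm=35.0):
    """Estimated % dose change at the match plane (+ overlap, - gap)."""
    return pct_per_2mm / 2.0 * misalignment_mm

for mm in (-2.0, -1.0, 0.5, 2.0):
    print(f"{mm:+.1f} mm -> {matchline_dose_change(mm):+.1f}% at the matchline")
```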

  4. Lower bounds on the independence number of certain graphs of odd girth at least seven

    DEFF Research Database (Denmark)

    Pedersen, A. S.; Rautenbach, D.; Regen, F.

    2011-01-01

    Heckman and Thomas [C.C. Heckman, R. Thomas, A new proof of the independence ratio of triangle-free cubic graphs, Discrete Math. 233 (2001) 233-237] proved that every connected subcubic triangle-free graph G has an independent set of order at least (4n(G) - m(G) - 1)/7 where n(G) and m(G) denote...
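
    The bound quoted here is easy to check numerically. The sketch below evaluates (4n(G) - m(G) - 1)/7 on the Petersen graph (connected, cubic, and triangle-free, so the Heckman-Thomas bound applies) and brute-forces the exact independence number for comparison; the graph choice is illustrative.

```python
import itertools
import math
import networkx as nx

G = nx.petersen_graph()            # cubic, triangle-free, n = 10, m = 15
n, m = G.number_of_nodes(), G.number_of_edges()
bound = (4 * n - m - 1) / 7
print(f"bound {bound:.2f} -> independent set of size >= {math.ceil(bound)}")

# Brute-force the exact independence number (fine for this small graph).
best = 0
for k in range(n, 0, -1):
    if any(all(not G.has_edge(u, v) for u, v in itertools.combinations(S, 2))
           for S in itertools.combinations(G.nodes, k)):
        best = k
        break
print("independence number:", best)   # 4 for the Petersen graph
```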

  5. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  6. Probable relationship between partitions of the set of codons and the origin of the genetic code.

    Science.gov (United States)

    Salinas, Dino G; Gallardo, Mauricio O; Osorio, Manuel I

    2014-03-01

    Here we study the distribution of randomly generated partitions of the set of amino acid-coding codons. Some results apply findings from a previous work on the Stirling numbers of the second kind and triplet codes, both to the case of triplet codes having four stop codons, as in the mammalian mitochondrial genetic code, and to hypothetical doublet codes. Extending previous results, in this work we find that the most probable number of blocks of synonymous codons in a genetic code is similar to the number of amino acids when there are four stop codons, as it could also have been for a primigenious doublet code. We also study the integer partitions associated with patterns of synonymous codons and show, for the canonical code, that the standard deviation inside an integer partition is one of the most probable. We think that, in some early epoch, the genetic code might have had a maximum of disorder or entropy, independent of the assignment between codons and amino acids, reaching a state similar to the "code freeze" proposed by Francis Crick. In later stages, deterministic rules may have reassigned codons to amino acids, forming the natural codes, such as the canonical code, but keeping the numerical features describing the set partitions and the integer partitions, like "fossil numbers"; both kinds of partitions concern the set of amino acid-coding codons. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
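
    To make the combinatorial claim concrete, the sketch below finds the number of blocks k maximizing the Stirling number of the second kind S(n, k) for n = 60 amino-acid-coding codons (64 minus four stop codons); by the paper's argument this mode should land near the 20 amino acids. The code is illustrative, not taken from the paper.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

n = 60                      # 64 codons minus four stop codons
k_star = max(range(1, n + 1), key=lambda k: stirling2(n, k))
print("most probable number of blocks:", k_star)   # expected close to 20
```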

  7. Maximum relative speeds of living organisms: Why do bacteria perform as fast as ostriches?

    Science.gov (United States)

    Meyer-Vernet, Nicole; Rospars, Jean-Pierre

    2016-12-01

    Self-locomotion is central to animal behaviour and survival. It is generally analysed by focusing on preferred speeds and gaits under particular biological and physical constraints. In the present paper we focus instead on the maximum speed and we study its order-of-magnitude scaling with body size, from bacteria to the largest terrestrial and aquatic organisms. Using data for about 460 species of various taxonomic groups, we find a maximum relative speed of the order of magnitude of ten body lengths per second over a 10^20-fold mass range of running and swimming animals. This result implies a locomotor time scale of the order of one tenth of a second, virtually independent of body size, anatomy and locomotion style, whose ubiquity requires an explanation building on basic properties of motile organisms. From first-principle estimates, we relate this generic time scale to other basic biological properties, using in particular the recent generalisation of the muscle specific tension to molecular motors. Finally, we go a step further by relating this time scale to still more basic quantities, such as environmental conditions on Earth, in addition to fundamental physical and chemical constants.

  8. Provision of financial transmission rights including assessment of maximum volumes of obligations and options

    International Nuclear Information System (INIS)

    Kristiansen, Tarjei

    2007-01-01

    This paper studies the risks faced by the providers of financial transmission rights (FTRs). The introduction of FTRs in different systems in the USA must be viewed in relation to the organization of the market. Often, private players own the central grid, while an independent system operator (ISO) operates the grid. The revenues from transmission congestion collected in the day-ahead and balancing markets should give the ISO sufficient revenues to cover the costs associated with providing FTRs. This can be ensured if the issued FTRs fulfill the simultaneous feasibility test described by Hogan. This test is studied on a three-node network under different assumptions to find the maximum volumes that can be sold, including contingency constraints. Next, the feasibility test is analyzed taking into account the proceeds from the FTR auction, demonstrating that a higher volume might be issued. We introduce uncertainty under different scenarios for locational prices and calculate the maximum provided volumes. As a tool for risk management, the provider of the FTRs can use the Value at Risk approach. Finally, the provision of FTRs by private parties is discussed. (author)
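
    A minimal sketch of the simultaneous feasibility test on a three-node DC network with equal line reactances (all numbers are placeholders, and contingency constraints are omitted): the net injections implied by a candidate FTR set must keep every line flow within its limit.

```python
import numpy as np

# Lines: 1-2, 1-3, 2-3; columns: injections at nodes 1 and 2 (node 3 slack).
PTDF = np.array([[1/3, -1/3],
                 [2/3,  1/3],
                 [1/3,  2/3]])
limits = np.array([100.0, 150.0, 120.0])    # MW thermal limits (placeholders)

# Candidate FTR obligations: (from_node, to_node, MW).
ftrs = [(1, 3, 90.0), (2, 3, 60.0), (1, 2, 30.0)]

inj = np.zeros(2)                           # net injections at nodes 1 and 2
for src, dst, mw in ftrs:
    for node, sign in ((src, +1.0), (dst, -1.0)):
        if node in (1, 2):                  # slack-node injections drop out
            inj[node - 1] += sign * mw

flows = PTDF @ inj
feasible = np.all(np.abs(flows) <= limits)
print(flows, "feasible" if feasible else "infeasible")
```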

  9. Effects of drop sets with resistance training on increases in muscle CSA, strength, and endurance: a pilot study.

    Science.gov (United States)

    Ozaki, Hayao; Kubota, Atsushi; Natsume, Toshiharu; Loenneke, Jeremy P; Abe, Takashi; Machida, Shuichi; Naito, Hisashi

    2018-03-01

    To investigate the effects of a single high-load (80% of one repetition maximum [1RM]) set with additional drop sets descending to a low-load (30% 1RM) without recovery intervals on muscle strength, endurance, and size in untrained young men. Nine untrained young men performed dumbbell curls to concentric failure 2-3 days per week for 8 weeks. Each arm was randomly assigned to one of the following three conditions: 3 sets of high-load (HL, 80% 1RM) resistance exercise, 3 sets of low-load (LL, 30% 1RM) resistance exercise, and a single high-load (SDS) set with additional drop sets descending to a low-load. The mean training time per session, including recovery intervals, was lowest in the SDS condition. Elbow flexor muscle cross-sectional area (CSA) increased similarly in all three conditions. Maximum isometric and 1RM strength of the elbow flexors increased from pre to post only in the HL and SDS conditions. Muscular endurance measured by maximum repetitions at 30% 1RM increased only in the LL and SDS conditions. A SDS resistance training program can simultaneously increase muscle CSA, strength, and endurance in untrained young men, even with lower training time compared to typical resistance exercise protocols using only high- or low-loads.

  10. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
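
    As a hedged illustration of marginal maximum likelihood (not the ltm implementation, and using a Rasch model rather than the generalized partial credit model), the sketch below integrates the latent ability out with Gauss-Hermite quadrature and recovers simulated item difficulties.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_persons, n_items = 500, 5
true_b = np.linspace(-1.5, 1.5, n_items)          # item difficulties
theta = rng.standard_normal(n_persons)            # latent abilities
Y = (rng.random((n_persons, n_items)) <
     1 / (1 + np.exp(-(theta[:, None] - true_b)))).astype(float)

# Gauss-Hermite nodes/weights for a standard-normal ability prior.
nodes, weights = np.polynomial.hermite_e.hermegauss(21)
weights = weights / weights.sum()

def neg_marginal_loglik(b):
    # P(correct | theta_q) on the quadrature grid: shape (n_quad, n_items).
    p = 1 / (1 + np.exp(-(nodes[:, None] - b[None, :])))
    # Log-likelihood of each response pattern at each node: (n_persons, n_quad).
    logls = Y @ np.log(p).T + (1 - Y) @ np.log(1 - p).T
    marginal = np.log(np.exp(logls) @ weights)    # integrate theta out
    return -marginal.sum()

fit = minimize(neg_marginal_loglik, np.zeros(n_items), method="BFGS")
print(np.round(fit.x, 2), "vs true", true_b)
```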

  11. Comparing maximum intercuspal contacts of virtual dental patients and mounted dental casts.

    Science.gov (United States)

    Delong, Ralph; Ko, Ching-Chang; Anderson, Gary C; Hodges, James S; Douglas, W H

    2002-12-01

    Quantitative measures of occlusal contacts are of paramount importance in the study of chewing dysfunction. A tool is needed to identify and quantify occlusal parameters without occlusal interference caused by the technique of analysis. This laboratory simulation study compared occlusal contacts constructed from 3-dimensional images of dental casts and interocclusal records with contacts found by use of conventional methods. Dental casts of 10 completely dentate adults were mounted in a semi-adjustable Denar articulator. Maximum intercuspal contacts were marked on the casts using red film. Intercuspal records made with an experimental vinyl polysiloxane impression material recorded maximum intercuspation. Three-dimensional virtual models of the casts and interocclusal records were made using custom software and an optical scanner. Contacts were calculated between virtual casts aligned manually (CM), aligned with interocclusal records scanned seated on the mandibular casts (C1) or scanned independently (C2), and directly from virtual interocclusal records (IR). Sensitivity and specificity calculations used the marked contacts as the standard. Contact parameters were compared between method pairs. Statistical comparisons used analysis of variance and the Tukey-Kramer post hoc test (P < .05). Agreement between method pairs ranked CM/C1 = CM/C2 > C2/IR > CM/IR > C1/IR, where ">" means "closer than." Within the limits of this study, occlusal contacts calculated from aligned virtual casts accurately reproduce articulator contacts.

  12. Regional maximum rainfall analysis using L-moments at the Titicaca Lake drainage, Peru

    Science.gov (United States)

    Fernández-Palomino, Carlos Antonio; Lavado-Casimiro, Waldo Sven

    2017-08-01

    The present study investigates the application of the index flood L-moments-based regional frequency analysis procedure (RFA-LM) to the annual maximum 24-h rainfall (AM) of 33 rainfall gauge stations (RGs) to estimate rainfall quantiles at the Titicaca Lake drainage (TL). The study region was chosen because it is characterised by common floods that affect agricultural production and infrastructure. First, detailed quality analyses and verification of the RFA-LM assumptions were conducted. For this purpose, different tests for outlier verification, homogeneity, stationarity, and serial independence were employed. Then, the application of the RFA-LM procedure allowed us to consider the TL as a single, hydrologically homogeneous region, in terms of its maximum rainfall frequency. That is, this region can be modelled by a generalised normal (GNO) distribution, chosen according to the Z test for goodness-of-fit, L-moments (LM) ratio diagram, and an additional evaluation of the precision of the regional growth curve. Due to the low density of RGs in the TL, it was important to produce maps of the AM design quantiles estimated using RFA-LM. Therefore, the ordinary Kriging interpolation (OK) technique was used. These maps will be a useful tool for determining the different AM quantiles at any point of interest for hydrologists in the region.
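
    The building blocks of the procedure are sample L-moments. A minimal sketch (with a synthetic series, not the study's data) estimates the first three L-moments and the L-moment ratios from probability-weighted moments:

```python
import numpy as np

def sample_lmoments(x):
    """First three sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2, l2 / l1   # mean, L-scale, L-skewness, L-CV

# Placeholder annual-maximum 24-h rainfall series (mm).
am = np.array([38.2, 41.5, 55.1, 33.8, 47.9, 62.3, 44.0, 51.6, 36.4, 58.7])
l1, l2, t3, lcv = sample_lmoments(am)
print(f"l1={l1:.1f} mm  l2={l2:.1f} mm  L-skew={t3:.2f}  L-CV={lcv:.2f}")
```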

  13. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    International Nuclear Information System (INIS)

    2000-01-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.
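
    For illustration only (the weighting factors shown are ICRP-style examples and the absorbed doses are invented), the quantities the guide defines combine as H_T = Σ_R w_R · D_(T,R) for the equivalent dose and E = Σ_T w_T · H_T for the effective dose:

```python
# Radiation and tissue weighting factors (illustrative ICRP-style values).
w_R = {"photon": 1.0, "neutron_1MeV": 20.0}
w_T = {"lung": 0.12, "thyroid": 0.04, "skin": 0.01}

# Absorbed dose in gray per (tissue, radiation) pair (made-up numbers).
D = {("lung", "photon"): 2e-3, ("lung", "neutron_1MeV"): 1e-4,
     ("thyroid", "photon"): 5e-4, ("skin", "photon"): 1e-3}

# Equivalent dose per tissue, then effective dose over all tissues.
H = {t: sum(w_R[r] * d for (tt, r), d in D.items() if tt == t) for t in w_T}
E = sum(w_T[t] * H[t] for t in w_T)
print({t: f"{h * 1e3:.2f} mSv" for t, h in H.items()}, f"E = {E * 1e3:.3f} mSv")
```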

  14. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.

  15. Postexercise blood flow restriction does not enhance muscle hypertrophy induced by multiple-set high-load resistance exercise.

    Science.gov (United States)

    Madarame, Haruhiko; Nakada, Satoshi; Ohta, Takahisa; Ishii, Naokata

    2018-05-01

    To test the applicability of postexercise blood flow restriction (PEBFR) in practical training programmes, we investigated whether PEBFR enhances muscle hypertrophy induced by multiple-set high-load resistance exercise (RE). Seven men completed an eight-week RE programme for the knee extensor muscles. Employing a within-subject design, one leg was subjected to RE + PEBFR, whereas the contralateral leg was subjected to RE only. On each exercise session, participants performed three sets of unilateral knee extension exercise at approximately 70% of their one-repetition maximum for the RE leg first, and then performed three sets for the RE + PEBFR leg. Immediately after completion of the third set, the proximal portion of the RE + PEBFR leg was compressed with an air-pressure cuff for 5 min at a pressure ranging from 100 to 150 mmHg. If participants could perform 10 repetitions for three sets in two consecutive exercise sessions, the work load was increased by 5% at the next exercise session. Muscle thickness and strength of the knee extensor muscles were measured before and after the eight-week training period and after the subsequent eight-week detraining period. There was a main effect of time but no condition × time interaction or main effect of condition for muscle thickness and strength. Both muscle thickness and strength increased after the training period independent of the condition. This result suggests that PEBFR would not be an effective training method, at least in an early phase of adaptation to high-load resistance exercise. © 2017 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  16. Bootstrap-based Support of HGT Inferred by Maximum Parsimony

    Directory of Open Access Journals (Sweden)

    Nakhleh Luay

    2010-05-01

    Full Text Available Abstract Background: Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results: In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions: We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.

  17. Bootstrap-based support of HGT inferred by maximum parsimony.

    Science.gov (United States)

    Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay

    2010-05-05

    Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
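
    Schematically, the procedure resamples alignment columns, re-infers reticulation events on each replicate, and reports the fraction of replicates supporting each event. In the sketch below the inference step is a mock standing in for the maximum-parsimony network search (NEPAL in the paper); everything else shows the bootstrap bookkeeping.

```python
import random
from collections import Counter

def infer_reticulations(alignment):
    """Mock inference for demonstration only: always reports a single event.
    Replace with the real maximum-parsimony network search."""
    return {("donor_A", "recipient_B")}

def bootstrap_support(alignment, n_replicates=100, seed=1):
    rng = random.Random(seed)
    n_cols = len(alignment[0])
    counts = Counter()
    for _ in range(n_replicates):
        cols = [rng.randrange(n_cols) for _ in range(n_cols)]  # resample columns
        replicate = ["".join(seq[c] for c in cols) for seq in alignment]
        counts.update(infer_reticulations(replicate))
    # Support of an event = fraction of replicates in which it was inferred.
    return {event: n / n_replicates for event, n in counts.items()}

alignment = ["ACGTACGTAC", "ACGTTCGTAC", "ACGAACGTTC", "TCGAACGTTC"]
print(bootstrap_support(alignment))   # {('donor_A', 'recipient_B'): 1.0}
```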

  18. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  19. Real-world maximum power point tracking simulation of PV system based on Fuzzy Logic control

    Directory of Open Access Journals (Sweden)

    Ahmed M. Othman

    2012-12-01

    Full Text Available In recent years, solar energy has become one of the most important alternative sources of electric energy, so it is important to improve the efficiency and reliability of photovoltaic (PV) systems. Maximum power point tracking (MPPT) plays an important role in photovoltaic power systems because it maximizes the power output from a PV system for a given set of conditions, and therefore maximizes array efficiency. This paper presents a maximum power point tracker (MPPT) using Fuzzy Logic theory for a PV system. The work focuses on the well-known Perturb and Observe (P&O) algorithm, which is compared to a designed fuzzy logic controller (FLC). The simulation work deals with the MPPT controller and a DC/DC Ćuk converter feeding a load. The results showed that the proposed Fuzzy Logic MPPT in the PV system is valid.
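
    For reference, the baseline P&O hill-climbing step the paper starts from can be sketched in a few lines; the PV curve below is a crude single-peak placeholder, not a validated array model, and the fuzzy controller itself is not reproduced here.

```python
import numpy as np

def pv_power(v):
    """Toy PV curve with a single maximum (placeholder for a real array model)."""
    i = 8.0 * (1.0 - np.exp((v - 36.0) / 3.0))   # crude I-V shape
    return max(v * i, 0.0)

def perturb_and_observe(v, p_prev, dv_prev):
    """One P&O step: if power rose, keep perturbing the same way, else reverse."""
    p = pv_power(v)
    dv = dv_prev if p > p_prev else -dv_prev
    return v + dv, p, dv

v, p, dv = 20.0, 0.0, 0.2          # start below the maximum power point
for _ in range(200):
    v, p, dv = perturb_and_observe(v, p, dv)
print(f"operating point ~{v:.1f} V, {p:.1f} W")   # oscillates around the MPP
```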

  20. Muscle Power Is an Independent Determinant of Pain and Quality of Life in Knee Osteoarthritis.

    Science.gov (United States)

    Reid, Kieran F; Price, Lori Lyn; Harvey, William F; Driban, Jeffrey B; Hau, Cynthia; Fielding, Roger A; Wang, Chenchen

    2015-12-01

    This study examined the relationships between leg muscle strength, power, and perceived disease severity in subjects with knee osteoarthritis (OA) in order to determine whether dynamic leg extensor muscle power would be associated with pain and quality of life in knee OA. Baseline data on 190 subjects with knee OA (mean ± SD age 60.2 ± 10.4 years, body mass index 32.7 ± 7.2 kg/m^2) were obtained from a randomized controlled trial. Knee pain was measured using the Western Ontario and McMaster Universities Osteoarthritis Index, and health-related quality of life was assessed using the Short Form 36 (SF-36). One-repetition maximum (1RM) strength was assessed using the bilateral leg press, and peak muscle power was measured during 5 maximum voluntary velocity repetitions at 40% and 70% of 1RM. In univariate analysis, greater muscle power was significantly associated with pain (r = -0.17, P < 0.05) and with SF-36 physical component summary (PCS) scores. After adjusting for covariates, muscle power was a significant independent predictor of pain (P ≤ 0.05) and PCS scores (P ≤ 0.04). However, muscle strength was not an independent determinant of pain or quality of life (P ≥ 0.06). Muscle power is an independent determinant of pain and quality of life in knee OA. Compared to strength, muscle power may be a more clinically important measure of muscle function within this population. New trials to systematically examine the impact of muscle power training interventions on disease severity in knee OA are particularly warranted. © 2015, American College of Rheumatology.

  1. The Independent Payment Advisory Board.

    Science.gov (United States)

    Manchikanti, Laxmaiah; Falco, Frank J E; Singh, Vijay; Benyamin, Ramsin M; Hirsch, Joshua A

    2011-01-01

    The Independent Payment Advisory Board (IPAB) is a vastly powerful component of the president's health care reform law, with authority to issue recommendations to reduce the growth in Medicare spending, providing recommendations to be considered by Congress and implemented by the administration on a fast track basis. Ever since its inception, IPAB has been one of the most controversial issues of the Patient Protection and Affordable Care Act (ACA), even though the powers of IPAB are restricted and multiple sectors of health care have been protected in the law. IPAB works by recommending policies to Congress to help Medicare provide better care at a lower cost, which would include ideas on coordinating care, getting rid of waste in the system, providing incentives for best practices, and prioritizing primary care. Congress then has the power to accept or reject these recommendations. However, Congress faces extreme limitations, either to enact policies that achieve equivalent savings, or let the Secretary of Health and Human Services (HHS) follow IPAB's recommendations. IPAB has strong supporters and opponents, leading to arguments for and against it, extending even to the introduction of legislation to repeal IPAB. The origins of IPAB are found in the ideology of the National Institute for Health and Clinical Excellence (NICE) and the impetus of exploring health care costs, even though IPAB's authority seems to be limited to Medicare only. The structure and operation of IPAB differs from Medicare and has been called the Medicare Payment Advisory Commission (MedPAC) on steroids. The board membership consists of 15 full-time members appointed by the president and confirmed by the Senate with options for recess appointments. The IPAB statute sets target growth rates for Medicare spending. The applicable percent for maximum savings appears to be 0.5% for year 2015, 1% for 2016, 1.25% for 2017, and 1.5% for 2018 and later. The IPAB Medicare proposal process involves

  2. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ_2, ζ_3, and ζ_4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s^-1 for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s^-1 for carbon stars (the neutronization limit) and to 893 km s^-1 for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores

  3. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  4. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...

  5. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.
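
    The ratio studied here can be prototyped as follows. Because a linear oscillator's response to a rotated ground-motion pair is the same rotation of the two component responses, the sketch rotates response histories directly; the input motions are synthetic noise, and GMRotI50's period-independent rotation is simplified to an as-recorded geometric mean, so this is only a schematic of the comparison, not the NGA definition.

```python
import numpy as np
from scipy.signal import lsim

dt, n = 0.01, 2000
rng = np.random.default_rng(3)
a1, a2 = rng.standard_normal(n), rng.standard_normal(n)  # placeholder accels
t = np.arange(n) * dt

def sdof_disp(acc, T=1.0, zeta=0.05):
    """Relative displacement of a damped SDOF oscillator, x'' + 2*zeta*wn*x' +
    wn^2*x = -acc, via linear simulation of the transfer function."""
    wn = 2 * np.pi / T
    system = ([-1.0], [1.0, 2 * zeta * wn, wn**2])
    _, x, _ = lsim(system, acc, t)
    return x

x1, x2 = sdof_disp(a1), sdof_disp(a2)
angles = np.radians(np.arange(0, 180, 1))
peaks = np.array([np.max(np.abs(np.cos(th) * x1 + np.sin(th) * x2))
                  for th in angles])
sa_max = peaks.max()                 # maximum-over-angles (RotD100-style) demand
gm = np.sqrt(peaks[0] * peaks[90])   # as-recorded geometric mean
print(f"max / geometric-mean ratio: {sa_max / gm:.2f}")
```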

  6. The Relation of Birth Order, Social Class, and Need Achievement to Independent Judgement

    Science.gov (United States)

    Rhine, W. Ray

    1974-01-01

    This article reports an investigation in which the brith order, social class, and level of achievement arousal are the variables considered when fifth and sixth-grade girls make independent judgements in performing a set task. (JH)

  7. Number of independent parameters in the potentiometric titration of humic substances.

    Science.gov (United States)

    Lenoir, Thomas; Manceau, Alain

    2010-03-16

    With the advent of high-precision automatic titrators operating in pH stat mode, measuring the mass balance of protons in solid-solution mixtures against the pH of natural and synthetic polyelectrolytes is now routine. However, titration curves of complex molecules typically lack obvious inflection points, which complicates their analysis despite the high-precision measurements. The calculation of site densities and median proton affinity constants (pK) from such data can lead to considerable covariance between fit parameters. Knowing the number of independent parameters that can be freely varied during the least-squares minimization of a model fit to titration data is necessary to improve the model's applicability. This number was calculated for natural organic matter by applying principal component analysis (PCA) to a reference data set of 47 independent titration curves from fulvic and humic acids measured at I = 0.1 M. The complete data set was reconstructed statistically from pH 3.5 to 9.8 with only six parameters, compared to seven or eight generally adjusted with common semi-empirical speciation models for organic matter, and explains correlations that occur with the higher number of parameters. Existing proton-binding models are not necessarily overparametrized, but instead titration data lack the sensitivity needed to quantify the full set of binding properties of humic materials. Model-independent conditional pK values can be obtained directly from the derivative of titration data, and this approach is the most conservative. The apparent proton-binding constants of the 23 fulvic acids (FA) and 24 humic acids (HA) derived from a high-quality polynomial parametrization of the data set are pK(H,COOH)(FA) = 4.18 +/- 0.21, pK(H,Ph-OH)(FA) = 9.29 +/- 0.33, pK(H,COOH)(HA) = 4.49 +/- 0.18, and pK(H,Ph-OH)(HA) = 9.29 +/- 0.38. Their values at other ionic strengths are more reliably calculated with the empirical Davies equation than any existing model fit.
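
    The dimensionality argument can be reproduced in outline: stack the titration curves as rows, center them, and count how many singular values are needed to explain essentially all of the variance. The curves below are synthetic two-site Henderson-Hasselbalch isotherms standing in for the 47 measured titrations, so the printed count illustrates the procedure rather than the paper's result.

```python
import numpy as np

rng = np.random.default_rng(7)
pH = np.linspace(3.5, 9.8, 120)

def charge_curve(q1, pk1, q2, pk2):
    # Proton binding from two Henderson-Hasselbalch site classes.
    return q1 / (1 + 10**(pH - pk1)) + q2 / (1 + 10**(pH - pk2))

# 47 synthetic curves: carboxylic-type and phenolic-type sites plus noise.
curves = np.array([charge_curve(rng.uniform(3, 6), rng.normal(4.3, 0.2),
                                rng.uniform(1, 3), rng.normal(9.3, 0.35))
                   + rng.normal(0, 0.01, pH.size) for _ in range(47)])

X = curves - curves.mean(axis=0)                 # center before PCA/SVD
s = np.linalg.svd(X, compute_uv=False)
explained = np.cumsum(s**2) / np.sum(s**2)
n_components = int(np.searchsorted(explained, 0.999) + 1)
print("components for 99.9% of the variance:", n_components)
```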

  8. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions used to resolve the motion redundancy.

  9. Bayesian Maximum Entropy Based Algorithm for Digital X-ray Mammogram Processing

    Directory of Open Access Journals (Sweden)

    Radu Mutihac

    2009-06-01

    Full Text Available Basics of Bayesian statistics in inverse problems using the maximum entropy principle are summarized in connection with the restoration of positive, additive images from various types of data like X-ray digital mammograms. An efficient iterative algorithm for image restoration from large data sets based on the conjugate gradient method and Lagrange multipliers in nonlinear optimization of a specific potential function was developed. The point spread function of the imaging system was determined by numerical simulations of inhomogeneous breast-like tissue with microcalcification inclusions of various opacities. The processed digital and digitized mammograms resulted superior in comparison with their raw counterparts in terms of contrast, resolution, noise, and visibility of details.

  10. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections α^T X and β^T Y can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  11. Material-independent modes for electromagnetic scattering

    Science.gov (United States)

    Forestiere, Carlo; Miano, Giovanni

    2016-11-01

    In this Rapid Communication, we introduce a representation of the electromagnetic field for the analysis and synthesis of the full-wave scattering by a homogeneous dielectric object of arbitrary shape in terms of a set of eigenmodes independent of its permittivity. The expansion coefficients are rational functions of the permittivity. This approach naturally highlights the role of plasmonic and photonic modes in any scattering process and suggests a straightforward methodology to design the permittivity of the object to pursue a prescribed tailoring of the scattered field. We discuss in depth the application of the proposed approach to the analysis and design of the scattering properties of a dielectric sphere.

  12. Experimental Measurement-Device-Independent Entanglement Detection

    Science.gov (United States)

    Nawareg, Mohamed; Muhammad, Sadiq; Amselem, Elias; Bourennane, Mohamed

    2015-02-01

    Entanglement is one of the most puzzling features of quantum theory and of great importance for the new field of quantum information. The determination of whether a given state is entangled or not is one of the most challenging open problems of the field. Here we report on the experimental demonstration of measurement-device-independent (MDI) entanglement detection using the witness method for general two-qubit photon polarization systems. In the MDI setting, there is no requirement to assume perfect implementations, nor to trust the measurement devices. This experimental demonstration can be generalized for the investigation of properties of quantum systems and for the realization of cryptography and communication protocols.

  13. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  14. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though....../Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from...... might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian....

  15. Microwatt power consumption maximum power point tracking circuit using an analogue differentiator for piezoelectric energy harvesting

    Science.gov (United States)

    Chew, Z. J.; Zhu, M.

    2015-12-01

    A maximum power point tracking (MPPT) scheme by tracking the open-circuit voltage from a piezoelectric energy harvester using a differentiator is presented in this paper. The MPPT controller is implemented by using a low-power analogue differentiator and comparators without the need of a sensing circuitry and a power hungry controller. This proposed MPPT circuit is used to control a buck converter which serves as a power management module in conjunction with a full-wave bridge diode rectifier. Performance of this MPPT control scheme is verified by using the prototyped circuit to track the maximum power point of a macro-fiber composite (MFC) as the piezoelectric energy harvester. The MFC was bonded on a composite material and the whole specimen was subjected to various strain levels at frequency from 10 to 100 Hz. Experimental results showed that the implemented full analogue MPPT controller has a tracking efficiency between 81% and 98.66% independent of the load, and consumes an average power of 3.187 μW at 3 V during operation.

  16. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over

  17. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function, through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on Lp-norm is also proposed in order to take into account the idea of support vectors, however, overcoming the complexity involved in solving a constrained optimization problem, usually in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N) while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired on the Fisher discriminant analysis. Such algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under ROC curve (AUC) is applied as stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  18. Construction of state-independent proofs for quantum contextuality

    Science.gov (United States)

    Tang, Weidong; Yu, Sixia

    2017-12-01

    Since the enlightening proofs of quantum contextuality first established by Kochen and Specker, and also by Bell, various simplified proofs have been constructed to exclude the noncontextual hidden variable theory of our nature at the microscopic scale. The conflict between the noncontextual hidden variable theory and quantum mechanics is commonly revealed by Kochen-Specker sets of yes-no tests, represented by projectors (or rays), via either logical contradictions or noncontextuality inequalities in a state-(in)dependent manner. Here we propose a systematic and programmable construction of a state-independent proof from a given set of nonspecific rays in C^3 according to their Gram matrix. This approach brings us a greater convenience in the experimental arrangements. Besides, our proofs in C^3 can also be generalized to any higher-dimensional systems by a recursive method.

  19. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  20. Evaluation of regulatory variation and theoretical health risk for pesticide maximum residue limits in food.

    Science.gov (United States)

    Li, Zijian

    2018-08-01

    To evaluate whether pesticide maximum residue limits (MRLs) can protect public health, a deterministic dietary risk assessment of maximum pesticide legal exposure was conducted to convert global MRLs to theoretical maximum dose intake (TMDI) values by estimating the average food intake rate and human body weight for each country. A total of 114 nations (58% of the total nations in the world) and two international organizations, including the European Union (EU) and Codex (WHO), have regulated at least one of the most currently used pesticides in at least one of the most consumed agricultural commodities. In this study, 14 of the most commonly used pesticides and 12 of the most commonly consumed agricultural commodities were identified and selected for analysis. A health risk analysis indicated that nearly 30% of the computed pesticide TMDI values were greater than the acceptable daily intake (ADI) values; however, many nations lack common pesticide MRLs in many commonly consumed foods, and other human exposure pathways, such as soil, water, and air, were not considered. Normality tests of the TMDI value sets indicated that all distributions had a right skewness due to large TMDI clusters at the low end of the distribution, which were caused by some strict pesticide MRLs regulated by the EU (normally a default MRL of 0.01 mg/kg when essential data are missing). The Box-Cox transformation and optimal lambda (λ) were applied to these TMDI distributions, and normality tests of the transformed data set indicated that the power-transformed TMDI values of at least eight pesticides presented a normal distribution. It was concluded that unifying strict pesticide MRLs across nations worldwide could significantly skew the distribution of TMDI values to the right, lower the legal exposure to pesticides, and effectively control human health risks. Copyright © 2018 Elsevier Ltd. All rights reserved.
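
    The conversion at the core of the study is, in effect, TMDI = sum over commodities of (MRL × intake) divided by body weight, compared against the ADI. A minimal sketch of that arithmetic follows; every number in it (MRLs, intake rates, body weight, ADI) is an illustrative placeholder, not a value from the study.

```python
# Theoretical maximum daily intake (TMDI) for one pesticide in one country.
# All numbers are illustrative placeholders, not values from the study.
mrl = {"rice": 0.05, "apple": 0.01, "potato": 0.02}     # MRLs, mg/kg (assumed)
intake = {"rice": 0.25, "apple": 0.10, "potato": 0.15}  # kg/person/day (assumed)
body_weight = 60.0                                      # kg, average (assumed)
adi = 0.001                                             # mg/kg bw/day (assumed)

tmdi = sum(mrl[c] * intake[c] for c in mrl) / body_weight
print(f"TMDI = {tmdi:.5f} mg/kg bw/day; exceeds ADI: {tmdi > adi}")
```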

  1. On the stability and maximum mass of differentially rotating relativistic stars

    Science.gov (United States)

    Weih, Lukas R.; Most, Elias R.; Rezzolla, Luciano

    2018-01-01

    The stability properties of rotating relativistic stars against prompt gravitational collapse to a black hole are rather well understood for uniformly rotating models. This is not the case for differentially rotating neutron stars, which are expected to be produced in catastrophic events such as the merger of a binary system of neutron stars or the collapse of a massive stellar core. We consider sequences of differentially rotating equilibrium models using the j-constant law and, by combining them with their dynamical evolution, we show that a sufficient stability criterion for differentially rotating neutron stars exists, similar to the one for their uniformly rotating counterparts. Namely: along a sequence of constant angular momentum, a dynamical instability sets in for central rest-mass densities slightly below the one of the equilibrium solution at the turning point. In addition, following Breu & Rezzolla, we show that 'quasi-universal' relations can be found when calculating the turning-point mass. In turn, this allows us to compute the maximum mass allowed by differential rotation, M_max,dr, in terms of the maximum mass of the non-rotating configuration, M_TOV, finding that M_max,dr ≃ (1.54 ± 0.05) M_TOV for all the equations of state we have considered.

  2. Gravitational wave chirp search: no-signal cumulative distribution of the maximum likelihood detection statistic

    International Nuclear Information System (INIS)

    Croce, R P; Demma, Th; Longo, M; Marano, S; Matta, V; Pierro, V; Pinto, I M

    2003-01-01

    The cumulative distribution of the supremum of a set (bank) of correlators is investigated in the context of maximum likelihood detection of gravitational wave chirps from coalescing binaries with unknown parameters. Accurate (lower-bound) approximants are introduced based on a suitable generalization of previous results by Mohanty. Asymptotic properties (in the limit where the number of correlators goes to infinity) are highlighted. The validity of numerical simulations made on small-size banks is extended to banks of any size, via a Gaussian correlation inequality

  3. Maximum Recoverable Gas from Hydrate Bearing Sediments by Depressurization

    KAUST Repository

    Terzariol, Marco

    2017-11-13

    The estimation of gas production rates from hydrate bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady state conditions and ceases to expand further. Analytical solutions show the relevance of (1) relative permeabilities between the hydrate free sediment, the hydrate bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; results highlight the need for innovative production strategies in order to make hydrate accumulations an economically viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.

  4. Autogoverno, Regulação, Função Normativa e Independência Interna no Judiciário / Self-Government, Regulatory Power and Judicial Independence

    Directory of Open Access Journals (Sweden)

    André Melo Gomes Pereira

    2016-10-01

    Full Text Available Purpose – This paper focuses on the relationship between the normative function performed by self-government agencies with regulatory functions in the Judiciary, often through general and abstract commands, and the judges' internal independence. Methodology/approach/design – This study analyses standards and the literature on regulation, the normative function, self-government and judicial independence. Illustratively, courts' decisions on specific cases were analyzed. Special attention was given to the theoretical bases of regulation, the normative function of government agencies and the proposal for democratization of judicial self-government, a model notably proposed by Zaffaroni. Findings – Self-government implies regulation. Regulation involves the exercise of the normative function. Internal democratization of judicial self-government and participation of all regulated agents in the Judiciary are necessary tools to ensure legitimacy and the internal independence required for the exercise of normative functions and the whole set of activities put forward by self-government agencies. Practical implications – The paper discusses a change in the institutional design of self-government in the Judiciary and the limits imposed by its normative function. Originality/value – It correlates the regulatory function developed by self-government agencies with the assurance of judges' internal independence.

  5. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  6. Independent attacks in imperfect settings: A case for a two-way quantum key distribution scheme

    International Nuclear Information System (INIS)

    Shaari, J.S.; Bahari, Iskandar

    2010-01-01

    We review the study on a two-way quantum key distribution protocol given imperfect settings through a simple analysis of a toy model and show that it can outperform a BB84 setup. We provide the sufficient condition for this as a ratio of optimal intensities for the protocols.

  7. Utility Independent Privacy Preserving Data Mining - Horizontally Partitioned Data

    Directory of Open Access Journals (Sweden)

    E Poovammal

    2010-06-01

    Full Text Available Micro data is a valuable source of information for research. However, publishing data about individuals for research purposes, without revealing sensitive information, is an important problem. The main objective of privacy preserving data mining algorithms is to obtain accurate results/rules by analyzing the maximum possible amount of data without unintended information disclosure. Data sets for analysis may be in a centralized server or in a distributed environment. In a distributed environment, the data may be horizontally or vertically partitioned. We have developed a simple technique by which horizontally partitioned data can be used for any type of mining task without information loss. The partitioned sensitive data at 'm' different sites are transformed using a mapping table or graded grouping technique, depending on the data type. This transformed data set is given to a third party for analysis. This may not be a trusted party, but it is still allowed to perform mining operations on the data set and to release the results to all the 'm' parties. The results are interpreted among the 'm' parties involved in the data sharing. The experiments conducted on real data sets prove that our proposed simple transformation procedure preserves one hundred percent of the performance of any data mining algorithm as compared to the original data set while preserving privacy.
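
    The transformation step can be pictured as a per-site substitution through a secret table. The sketch below is only one plausible reading of the mapping-table idea, since the abstract does not specify how the table is constructed; the names and data are invented for the example.

```python
# Hedged sketch of a mapping-table transformation for a sensitive categorical
# attribute: the owner keeps the table secret and releases only the codes.
import random

def build_mapping(values):
    """Assign a random opaque code to each distinct sensitive value."""
    distinct = sorted(set(values))
    codes = [f"C{i}" for i in range(len(distinct))]
    random.shuffle(codes)
    return dict(zip(distinct, codes))

site_data = ["diabetes", "asthma", "diabetes", "cancer"]     # one site's column
mapping = build_mapping(site_data)           # stays at the data owner's site
released = [mapping[v] for v in site_data]   # the third party mines these codes

print(released)   # mining results on the codes map back via the secret table
```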

  8. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  9. Maximum entropy based reconstruction of soft X ray emissivity profiles in W7-AS

    International Nuclear Information System (INIS)

    Ertl, K.; Linden, W. von der; Dose, V.; Weller, A.

    1996-01-01

    The reconstruction of 2-D emissivity profiles from soft X ray tomography measurements constitutes a highly underdetermined and ill-posed inversion problem, because of the restricted viewing access, the number of chords and the increased noise level in most plasma devices. An unbiased and consistent probabilistic approach within the framework of Bayesian inference is provided by the maximum entropy method, which is independent of model assumptions, but allows any prior knowledge available to be incorporated. The formalism is applied to the reconstruction of emissivity profiles in an NBI heated plasma discharge to determine the dependence of the Shafranov shift on β, the reduction of which was a particular objective in designing the advanced W7-AS stellarator. (author). 40 refs, 7 figs

  10. 40 CFR 141.62 - Maximum contaminant levels for inorganic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels for inorganic contaminants. 141.62 Section 141.62 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Water Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.62 Maximum...

  11. Cell-cycle regulation of non-enzymatic functions of the Drosophila methyltransferase PR-Set7.

    Science.gov (United States)

    Zouaz, Amel; Fernando, Céline; Perez, Yannick; Sardet, Claude; Julien, Eric; Grimaud, Charlotte

    2018-04-06

    Tight cell-cycle regulation of the histone H4-K20 methyltransferase PR-Set7 is essential for the maintenance of genome integrity. In mammals, this mainly involves the interaction of PR-Set7 with the replication factor PCNA, which triggers the degradation of the enzyme by the CRL4CDT2 E3 ubiquitin ligase. PR-Set7 is also targeted by the SCFβ-TRCP ligase, but the role of this additional regulatory pathway remains unclear. Here, we show that Drosophila PR-Set7 undergoes a cell-cycle proteolytic regulation, independently of its interaction with PCNA. Instead, Slimb, the ortholog of β-TRCP, is specifically required for the degradation of the nuclear pool of PR-Set7 prior to S phase. Consequently, inactivation of Slimb leads to nuclear accumulation of PR-Set7, which triggers aberrant chromatin compaction and G1/S arrest. Strikingly, these phenotypes result from non-enzymatic PR-Set7 functions that prevent proper histone H4 acetylation independently of H4K20 methylation. Altogether, these results identify the Slimb-mediated PR-Set7 proteolysis as a new critical regulatory mechanism required for proper interphase chromatin organization at G1/S transition.

  12. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  13. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  14. 40 CFR 141.61 - Maximum contaminant levels for organic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels for organic contaminants. 141.61 Section 141.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.61 Maximum contaminant...

  15. Probabilistic deletion of copies of linearly independent quantum states

    International Nuclear Information System (INIS)

    Feng Jian; Gao Yunfeng; Wang Jisuo; Zhan Mingsheng

    2002-01-01

    We show that each of two copies of the nonorthogonal states randomly selected from a certain set S can be probabilistically deleted by a general unitary-reduction operation if and only if the states are linearly independent. We derive a tight bound on the best possible deleting efficiencies. These results for 2→1 probabilistic deleting are also generalized into the case of N→M deleting (N,M positive integers and N>M)

  16. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  17. Performance comparison of machine learning algorithms and number of independent components used in fMRI decoding of belief vs. disbelief.

    Science.gov (United States)

    Douglas, P K; Harris, Sam; Yuille, Alan; Cohen, Mark S

    2011-05-15

    Machine learning (ML) has become a popular tool for mining functional neuroimaging data, and there are now hopes of performing such analyses efficiently in real-time. Towards this goal, we compared the accuracy of six different ML algorithms applied to neuroimaging data of persons engaged in a bivariate task, asserting their belief or disbelief of a variety of propositional statements. We performed unsupervised dimension reduction and automated feature extraction using independent component (IC) analysis and extracted IC time courses. Optimization of classification hyperparameters for each classifier occurred prior to assessment. Maximum accuracy was achieved at 92% for Random Forest, followed by 91% for AdaBoost, 89% for Naïve Bayes, 87% for a J48 decision tree, 86% for K*, and 84% for support vector machine. For real-time decoding applications, finding a parsimonious subset of diagnostic ICs might be useful. We used a forward search technique to sequentially add ranked ICs to the feature subspace. For the current data set, we determined that approximately six ICs represented a meaningful basis set for classification. We then projected these six IC spatial maps forward onto a later scanning session within subject. We then applied the optimized ML algorithms to these new data instances, and found that classification accuracy results were reproducible. Additionally, we compared our classification method to our previously published general linear model results on this same data set. The highest ranked IC spatial maps show similarity to brain regions associated with contrasts for belief > disbelief, and disbelief > belief. Copyright © 2010 Elsevier Inc. All rights reserved.
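
    The forward-search step can be sketched as sequential feature selection over ranked components: add the next-ranked IC and keep it only if cross-validated accuracy improves. The snippet below uses synthetic stand-in data and scikit-learn, and is an illustration of the idea rather than the study's pipeline.

```python
# Forward search over ranked IC features with a Random Forest classifier.
# Synthetic data; not the fMRI data or exact procedure from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 20))              # 80 trials x 20 IC time-course features
y = (X[:, 0] + X[:, 3] > 0).astype(int)    # labels driven by two "diagnostic" ICs

ranked = np.argsort(-np.abs(np.corrcoef(X.T, y)[-1, :-1]))  # rank ICs by |corr|
selected, best = [], 0.0
for ic in ranked:
    trial = selected + [int(ic)]
    acc = cross_val_score(RandomForestClassifier(random_state=0),
                          X[:, trial], y, cv=5).mean()
    if acc > best:                         # keep the IC only if accuracy improves
        selected, best = trial, acc

print(selected, round(best, 3))
```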

  18. The Impact of Problem Sets on Student Learning

    Science.gov (United States)

    Kim, Myeong Hwan; Cho, Moon-Heum; Leonard, Karen Moustafa

    2012-01-01

    The authors examined the role of problem sets on student learning in university microeconomics. A total of 126 students participated in the study in consecutive years. An independent samples t test showed that students who were not given answer keys outperformed students who were given answer keys. Multiple regression analysis showed that, along with…

  19. International urodynamic basic spinal cord injury data set.

    Science.gov (United States)

    Biering-Sørensen, F; Craggs, M; Kennelly, M; Schick, E; Wyndaele, J-J

    2008-07-01

    To create the International Urodynamic Basic Spinal Cord Injury (SCI) Data Set within the framework of the International SCI Data Sets. International working group. The draft of the data set was developed by a working group consisting of members appointed by the Neurourology Committee of the International Continence Society, the European Association of Urology, the American Spinal Injury Association (ASIA), the International Spinal Cord Society (ISCoS) and a representative of the Executive Committee of the International SCI Standards and Data Sets. The final version of the data set was developed after review and comments by members of the Executive Committee of the International SCI Standards and Data Sets, the ISCoS Scientific Committee, the ASIA Board, relevant and interested (international) organizations and societies (around 40) and persons, and the ISCoS Council. Endorsement of the data set by relevant organizations and societies will be obtained. To make the data set uniform, each variable and each response category within each variable have been specifically defined in a way that is designed to promote the collection and reporting of comparable minimal data. Variables included in the International Urodynamic Basic SCI Data Set are date of data collection, bladder sensation during filling cystometry, detrusor function, compliance during filling cystometry, function during voiding, detrusor leak point pressure, maximum detrusor pressure, cystometric bladder capacity and post-void residual volume.

  20. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    Science.gov (United States)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

    There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate: H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
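
    The degenerate functional form shared by the three entropy concepts for multinomial processes is easy to state in code; this check implements the H(p) above directly and assumes nothing beyond the abstract's formula.

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum_i p_i log p_i, the common form taken by S_EXT, S_IT and
    S_MEP for independent (multinomial) sampling processes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # convention: 0 log 0 = 0
    return -np.sum(p * np.log(p))

print(shannon_entropy([0.5, 0.25, 0.25]))   # about 1.0397 nats
```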

  1. Independence and Product Systems

    OpenAIRE

    Skeide, Michael

    2003-01-01

    Starting from elementary considerations about independence and Markov processes in classical probability we arrive at the new concept of conditional monotone independence (or operator-valued monotone independence). With the help of product systems of Hilbert modules we show that monotone conditional independence arises naturally in dilation theory.

  2. Introduction to axiomatic set theory

    CERN Document Server

    Takeuti, Gaisi

    1971-01-01

    In 1963, the first author introduced a course in set theory at the University of Illinois whose main objectives were to cover Gödel's work on the consistency of the axiom of choice (AC) and the generalized continuum hypothesis (GCH), and Cohen's work on the independence of AC and the GCH. Notes taken in 1963 by the second author were then taught by him in 1966, revised extensively, and are presented here as an introduction to axiomatic set theory. Texts in set theory frequently develop the subject rapidly, moving from key result to key result and suppressing many details. Advocates of the fast development claim at least two advantages. First, key results are highlighted, and second, the student who wishes to master the subject is compelled to develop the details on his own. However, an instructor using a "fast development" text must devote much class time to assisting his students in their efforts to bridge gaps in the text. We have chosen instead a development that is quite detailed and complete. F...

  3. Confluence or independence of microwave plasma bullets in atmospheric argon plasma jet plumes

    Science.gov (United States)

    Li, Ping; Chen, Zhaoquan; Mu, Haibao; Xu, Guimin; Yao, Congwei; Sun, Anbang; Zhou, Yuming; Zhang, Guanjun

    2018-03-01

    A plasma bullet is a guided ionization wave (streamer) that forms and propagates, normally in an atmospheric pressure plasma jet (APPJ). In most cases, only one ionization front is produced in a dielectric tube. The present study shows that two or three ionization fronts can be generated in a single quartz tube by using a microwave coaxial resonator. The argon APPJ plumes, with a maximum length of 170 mm, can be driven by continuous microwaves or microwave pulses. When the input power is higher than 90 W, two or three ionization fronts propagate independently at first; thereafter, they merge to form a central plasma jet plume. On the other hand, the plasma bullets move independently when lower input power is applied. For pulsed microwave discharges, the discharge images captured by a fast camera show the ionization process in detail. Another interesting finding is that the brightest plasma jet plumes always appear at the shrinking phase. Both the discharge images and electromagnetic simulations suggest that the merging or independent propagation of plasma bullets is resonantly excited by the locally enhanced electric fields, in terms of wave modes of traveling surface plasmon polaritons.

  4. Maximum distance between the Leader and the Laggard for three Brownian walkers

    International Nuclear Information System (INIS)

    Majumdar, Satya N; Bray, Alan J

    2010-01-01

    We consider three independent Brownian walkers moving on a line. The process terminates when the leftmost walker (the 'Leader') meets either of the other two walkers. For arbitrary values of the diffusion constants D_1 (the Leader), D_2 and D_3 of the three walkers, we compute the probability distribution P(m|y_2, y_3) of the maximum distance m between the Leader and the current rightmost particle (the 'Laggard') during the process, where y_2 and y_3 are the initial distances between the Leader and the other two walkers. The result has, for large m, the form P(m|y_2, y_3) ∼ A(y_2, y_3) m^{-δ}, where δ = (2π − θ)/(π − θ) and θ = cos^{-1}(D_1/√((D_1+D_2)(D_1+D_3))). The amplitude A(y_2, y_3) is also determined exactly
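
    The exponent quoted above is straightforward to evaluate; the helper below implements the stated formulas for θ and δ verbatim (only the example diffusion constants are made up).

```python
import numpy as np

def delta_exponent(D1, D2, D3):
    """Tail exponent delta in P(m|y2, y3) ~ A m**(-delta), per the abstract."""
    theta = np.arccos(D1 / np.sqrt((D1 + D2) * (D1 + D3)))
    return (2 * np.pi - theta) / (np.pi - theta)

# Equal diffusion constants: theta = arccos(1/2) = pi/3, so delta = 5/2.
print(delta_exponent(0.5, 0.5, 0.5))   # 2.5
```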

  5. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ∼14.5 ka.

  6. Absorption and scattering coefficients estimation in two-dimensional participating media using the generalized maximum entropy and Levenberg-Marquardt methods

    International Nuclear Information System (INIS)

    Berrocal T, Mariella J.; Roberty, Nilson C.; Silva Neto, Antonio J.; Universidade Federal, Rio de Janeiro, RJ

    2002-01-01

    The solution of inverse problems in participating media where there is emission, absorption and scattering of the radiation has several applications in engineering and medicine. The objective of this work is to estimate the absorption and scattering coefficients in two-dimensional heterogeneous participating media, using independently the Generalized Maximum Entropy and Levenberg-Marquardt methods. Both methods are based on the solution of the direct problem, which is modeled by the Boltzmann equation in Cartesian geometry. Some test cases are presented. (author)

  7. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

    B. Sizykh Grigory

    2017-01-01

    Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered area of the flow the pressure is a function of density. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density if the entropy is constant in the entire considered area of the flow. We show an example of a stationary subsonic irrotational flow in which the entropy has different values on different streamlines, and the pressure is not a function of density. The application of the maximum principle for subsonic flow to such a flow would be unreasonable. This example shows the relevance of the question about the location of the points of maximum velocity when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed for the 3-D case. A new proof of the maximum principle for subsonic flow is proposed. This proof does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow holds for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  8. Simulation of the maximum yield of sugar cane at different altitudes: effect of temperature on the conversion of radiation into biomass

    International Nuclear Information System (INIS)

    Martine, J.F.; Siband, P.; Bonhomme, R.

    1999-01-01

    To minimize the production costs of sugar cane for the diverse sites of production found in La Réunion, an improved understanding of the influence of temperature on the dry matter radiation quotient is required. Existing models simulate the temperature-radiation interaction poorly. A model of sugar cane growth has been fitted to the results from two contrasting sites (mean temperatures: 14-30 °C; total radiation: 10-25 MJ·m⁻²·d⁻¹), on a ratoon crop of cv R570, under conditions of non-limiting resources. Radiation interception, aerial biomass, the fraction of millable stems, and their moisture content were measured. The time-courses of the efficiency of radiation interception differed between sites. As a function of the sum of day-degrees, they were similar. The dry matter radiation quotient was related to temperature. The moisture content of millable stems depended on the day-degree sum. On the other hand, the leaf/stem ratio was independent of temperature. The relationships established enabled the construction of a simple model of yield potential. Applied to a set of sites representing the sugar cane growing area of La Réunion, it gave a good prediction of maximum yields. (author) [fr]

  9. Malnutrition is independently associated with skin tears in hospital inpatient setting-Findings of a 6-year point prevalence audit.

    Science.gov (United States)

    Munro, Emma L; Hickling, Donna F; Williams, Damian M; Bell, Jack J

    2018-05-24

    Skin tears cause pain, increased length of stay, increased costs, and reduced quality of life. Minimal research reports the association between skin tears and malnutrition using robust measures of nutritional status. This study aimed to articulate the association between malnutrition and skin tears in hospital inpatients, using a yearly point prevalence of inpatients included in the Queensland Patient Safety Bedside Audit, malnutrition audits and skin tear audits conducted at a metropolitan tertiary hospital between 2010 and 2015. Patients were excluded if admitted to mental health wards or were <18 years. A total of 2197 inpatients were included, with a median age of 71 years. The overall prevalence of skin tears was 8.1%. Malnutrition prevalence was 33.5%. Univariate analysis demonstrated associations between age (P < .001), body mass index (BMI) (P < .001) and malnutrition (P < .001) but not gender (P = .319). Binomial logistic regression analysis modelling demonstrated that malnutrition diagnosed using the Subjective Global Assessment was independently associated with skin tear incidence (odds ratio, OR: 1.63; 95% confidence interval, CI: 1.13-2.36) and multiple skin tears (OR 2.48 [95% CI 1.37-4.50]). BMI was not independently associated with skin tears or multiple skin tears. This study demonstrated independent associations between malnutrition and skin tear prevalence and multiple skin tears. It also demonstrated the limitations of BMI as a nutritional assessment measure. © 2018 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  10. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  11. Memory Retrieval Given Two Independent Cues: Cue Selection or Parallel Access?

    Science.gov (United States)

    Rickard, Timothy C.; Bajic, Daniel

    2004-01-01

    A basic but unresolved issue in the study of memory retrieval is whether multiple independent cues can be used concurrently (i.e., in parallel) to recall a single, common response. A number of empirical results, as well as potentially applicable theories, suggest that retrieval can proceed in parallel, though Rickard (1997) set forth a model that…

  12. Applications and Benefits for Big Data Sets Using Tree Distances and The T-SNE Algorithm

    Science.gov (United States)

    2016-03-01

    Master's thesis by Suyoung Lee, March 2016; thesis advisor: Samuel E. Buttrey. Approved for public release; distribution is unlimited. Abstract (maximum 200 words): Modern data sets often consist of unstructured data

  13. Assessment of Bearing Capacity and Stiffness in New Steel Sets Used for Roadway Support in Coal Mines

    Directory of Open Access Journals (Sweden)

    Renshu Yang

    2017-10-01

    Full Text Available There is high demand for roadway support in coal mines in swelling soft rocks. As high-strength steel sets can be an effective alternative for controlling large deformation in this type of rock, three new sets based on an original set are proposed in this research: a floor beam set, a roof and floor beams set, and a roof and floor beams and braces set. In order to examine the strength of the new sets, four scaled sets, one of the original design and three of the new designs, were manufactured and tested in loading experiments. Results indicated that all three new sets exhibited higher strength than the original set. In the experiments, the roof beam has a significant strengthening effect on the top arch, while the floor beam has a significant strengthening effect on the bottom arch. The maximum bearing capacity and stiffness of the top arch with a roof beam are increased to 1.63 times and 3.06 times those of the original set, and the maximum bearing capacity and stiffness of the bottom arch with a floor beam are increased to 1.44 times and 3.55 times those of the original set. Given the roof and floor beams, two additional braces in the bottom arch also have a significant strengthening effect on the bottom corners, but extra braces play little role in strengthening the top arch. These new sets provide more choices for roadway support in swelling soft rocks.

  14. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were

  15. Relay protection coordination with generator capability curve, excitation system limiters and power system relay protections settings

    OpenAIRE

    Buha Danilo; Buha Boško; Jačić Dušan; Gligorov Saša; Božilov Marko; Marinković Savo; Milosavljević Srđan

    2016-01-01

    The relay protection settings performed in the largest thermal power plant (TE "Nikola Tesla B") are presented and explained in this paper. The first calculation step is related to the coordination of the maximum stator current limiter settings, the settings of the overcurrent protection with inverse characteristics, and the permitted overload of the generator stator B1. In the second calculation step the settings of the generator impedance protection are determined, and the methods and criteria according ...

  16. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One of the recently proposed algorithms is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most of the existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms perform two-class classification, which cannot be used for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended for multi-class classification and its performance is evaluated. The results of the proposed algorithm show that it has acceptable results for hyperspectral data clustering.
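
    A generic alternating scheme for MMC can be sketched in a few lines: with labels fixed, train a large-margin classifier; with the hyperplane fixed, reassign labels from the decision values under a balance constraint that rules out the trivial single-cluster optimum. This is an illustration of the alternating idea on toy data, not the algorithm evaluated in the paper.

```python
# Maximum margin clustering by naive alternating optimization (toy sketch).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for _ in range(10):
    svm = LinearSVC(C=1.0).fit(X, labels)           # labels fixed: maximize margin
    scores = svm.decision_function(X)
    new = (scores > np.median(scores)).astype(int)  # hyperplane fixed: relabel;
    if np.array_equal(new, labels):                 # median split keeps balance
        break
    labels = new

print("cluster sizes:", np.bincount(labels))
```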

  17. Labor epidural analgesia is independent risk factor for neonatal pyrexia.

    Science.gov (United States)

    Agakidis, Charalampos; Agakidou, Eleni; Philip Thomas, Sumesh; Murthy, Prashanth; John Lloyd, David

    2011-09-01

    To explore whether epidural analgesia (EA) in labor is an independent risk factor for neonatal pyrexia after controlling for intrapartum pyrexia. Retrospective observational study of 480 consecutive term singleton infants born to mothers who received EA in labor (EA group) and 480 term infants delivered to mothers who did not receive EA (NEA group). Mothers in the EA group had a significantly higher incidence of intrapartum pyrexia [54/480 (11%) vs. 4/480 (0.8%), OR = 15.1] and of neonatal pyrexia [68/480 (14.2%) vs. 15/480 (3.1%), OR = 5.1]. Neonates in the EA group had a median duration of pyrexia of 1 h (maximum 5 h) with a peak temperature within 1 h. Stepwise logistic regression analysis showed that maternal EA was an independent risk factor for neonatal pyrexia (>37.5°C) after controlling for intrapartum pyrexia (>37.9°C) and other confounders (OR = 3.44, CI = 1.9-6.3). It is unnecessary to investigate febrile offspring of mothers who have had epidurals unless pyrexia persists for longer than 5 h or other signs or risk factors for neonatal sepsis are present.

  18. The 1990 conterminous U.S. AVHRR data set

    International Nuclear Information System (INIS)

    Eidenshink, J.C.

    1992-01-01

    The U.S. Geological Survey, using NOAA-11 Advanced Very High Resolution Radiometer (AVHRR) 1-km data, has produced a time series of 19 biweekly maximum normalized difference vegetation index (NDVI) composites of the conterminous United States for the 1990 growing season. Each biweekly composite included data from approximately 20 calibrated and georegistered daily overpasses. The output is a data set which includes all five calibrated AVHRR channels, NDVI values, three satellite/solar viewing angles, and date of observation pointer for each biweekly composite. The data set is intended for assessing seasonal variations in vegetation condition and provides a foundation for studying long-term changes in vegetation resulting from human interactions or global climate alterations. 12 refs

  19. Organizing Independent Student Work

    Directory of Open Access Journals (Sweden)

    Zhadyra T. Zhumasheva

    2015-03-01

    Full Text Available This article addresses issues in organizing independent student work. The author defines the term “independence”, discusses the concepts of independent learner work and independent learner work under the guidance of an instructor, proposes a classification of assignments to be done independently, and provides methodological recommendations as to the organization of independent student work. The article discusses the need for turning the student from a passive consumer of knowledge into an active creator of it, capable of formulating a problem, analyzing the ways of solving it, coming up with an optimum outcome, and proving its correctness. The preparation of highly qualified human resources is the primary condition for boosting Kazakhstan’s competitiveness. Independent student work is a means of fostering the professional competence of future specialists. The primary form of self-education is independent work.

  20. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
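
    For the discrete-time case, the lattice equation above can be stepped explicitly; the snippet uses the Nagumo nonlinearity f(u) = u(1-u)(u-a) mentioned in the abstract, with illustrative parameters and a periodic window standing in for Z. The paper's precise conditions on the time step are not reproduced here, but for this small dt the solution stays within [0, 1], as the weak maximum principle predicts.

```python
# Explicit stepping of  Delta_t u_x = k (u_{x-1} - 2 u_x + u_{x+1}) + f(u_x)
# with the bistable Nagumo nonlinearity f(u) = u (1 - u) (u - a).
# Parameters are illustrative; a periodic window stands in for the lattice Z.
import numpy as np

k, a, dt, n, steps = 1.0, 0.3, 0.1, 101, 200
f = lambda u: u * (1 - u) * (u - a)

u = np.where(np.arange(n) < n // 2, 1.0, 0.0)     # step initial datum in [0, 1]
for _ in range(steps):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)  # discrete Laplacian
    u = u + dt * (k * lap + f(u))

print(u.min(), u.max())   # remains within [0, 1] for small enough dt
```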

  1. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    Directory of Open Access Journals (Sweden)

    Peng Liu

    2017-01-01

    Full Text Available A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job's variable processing time is described by an increasing or a decreasing function dependent on the position of the job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine, and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due date. The objective is to maximize the product of their rational positive cooperative profits. A division of those jobs should be negotiated to yield a reasonable cooperative profit allocation scheme acceptable to them. We propose the sufficient and necessary conditions for the problems to have a positive integer solution.

  2. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    Science.gov (United States)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    For optical sensors aboard Earth-orbiting satellites such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), the radiometric response in the Reflective Solar Bands (RSB) is assumed to be described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
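
    The least-squares core of such a calibration is a weighted quadratic fit of DN against radiance. The sketch below shows only that step, with an assumed noise model and synthetic numbers; the paper's full maximum-likelihood treatment of noise in both variables and of model error goes well beyond this.

```python
# Weighted least-squares fit of dn = c0 + c1*L + c2*L**2 (synthetic example).
import numpy as np

rng = np.random.default_rng(3)
L = np.linspace(1.0, 100.0, 30)     # aperture spectral radiance (arbitrary units)
truth = 5.0 + 2.0 * L + 0.003 * L**2
sigma = 0.5 + 0.01 * truth          # assumed DN noise model
dn = truth + sigma * rng.normal(size=L.size)

A = np.vstack([np.ones_like(L), L, L**2]).T
sw = 1.0 / sigma                    # square roots of the weights 1/sigma**2
coef, *_ = np.linalg.lstsq(A * sw[:, None], dn * sw, rcond=None)
print(coef)                         # estimates of (c0, c1, c2)
```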

  3. Mean glucose level is not an independent risk factor for mortality in mixed ICU patients

    NARCIS (Netherlands)

    Ligtenberg, JJM; Meijering, S; Stienstra, Y; van der Horst, ICC; Vogelzang, M; Nijsten, MWN; Tulleken, JE; Zijlstra, JG

    Objective: To find out if there is an association between hyperglycaemia and mortality in mixed ICU patients. Design and setting: Retrospective cohort study over a 2-year period at the medical ICU of a university hospital. Measurements: Admission glucose, maximum and mean glucose, length of stay,

  4. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
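
    The contrast between the two decoders can be made concrete on a system small enough to enumerate: maximum likelihood returns the single ground state, while the finite-temperature (maximum-entropy) decoder signs the Boltzmann-averaged bit marginals. A toy enumeration with made-up couplings and fields, not the annealer experiment:

```python
# ML decoding (ground state) vs. bit-by-bit finite-temperature decoding
# on a tiny Ising chain in a field, by exhaustive enumeration.
import itertools
import numpy as np

J, beta = 1.0, 2.0
h = np.array([0.3, -0.2, 0.1, 0.4])               # toy noisy fields
n = h.size

def energy(s):
    s = np.asarray(s)
    return -J * np.sum(s[:-1] * s[1:]) - np.dot(h, s)

states = list(itertools.product([-1, 1], repeat=n))
E = np.array([energy(s) for s in states])
p = np.exp(-beta * (E - E.min())); p /= p.sum()   # Boltzmann distribution

ml_bits = np.array(states[int(np.argmin(E))])     # maximum-likelihood decode
marginals = np.array(states).T @ p                # <s_i> under the Boltzmann law
me_bits = np.sign(marginals)                      # maximum-entropy decode, bit by bit
print(ml_bits, me_bits)
```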

  5. Systems-based biological concordance and predictive reproducibility of gene set discovery methods in cardiovascular disease.

    Science.gov (United States)

    Azuaje, Francisco; Zheng, Huiru; Camargo, Anyela; Wang, Haiying

    2011-08-01

    The discovery of novel disease biomarkers is a crucial challenge for translational bioinformatics. Demonstration of both their classification power and reproducibility across independent datasets are essential requirements to assess their potential clinical relevance. Small datasets and multiplicity of putative biomarker sets may explain lack of predictive reproducibility. Studies based on pathway-driven discovery approaches have suggested that, despite such discrepancies, the resulting putative biomarkers tend to be implicated in common biological processes. Investigations of this problem have been mainly focused on datasets derived from cancer research. We investigated the predictive and functional concordance of five methods for discovering putative biomarkers in four independently-generated datasets from the cardiovascular disease domain. A diversity of biosignatures was identified by the different methods. However, we found strong biological process concordance between them, especially in the case of methods based on gene set analysis. With a few exceptions, we observed lack of classification reproducibility using independent datasets. Partial overlaps between our putative sets of biomarkers and the primary studies exist. Despite the observed limitations, pathway-driven or gene set analysis can predict potentially novel biomarkers and can jointly point to biomedically-relevant underlying molecular mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. AN ANALYSIS OF TEN YEARS OF THE FOUR GRAND SLAM MEN'S SINGLES DATA FOR LACK OF INDEPENDENCE OF SET OUTCOMES

    Directory of Open Access Journals (Sweden)

    Denny Meyer

    2006-12-01

    Full Text Available The objective of this paper is to use data from the highest level in men's tennis to assess whether there is any evidence to reject the hypothesis that the two players in a match have a constant probability of winning each set in the match. The data consists of all 4883 matches of grand slam men's singles over a 10 year period from 1995 to 2004. Each match is categorised by its sequence of wins (W) and losses (L) (in set 1, set 2, set 3, ...) for the eventual winner. Thus, there are several categories of matches, from WWW to LLWWW. The methodology involves fitting several probabilistic models to the frequencies of the above ten categories. One four-set category is observed to occur significantly more often than the other two. Correspondingly, a couple of the five-set categories occur more frequently than the others. This pattern is consistent when the data is split into two five-year subsets. The data provides significant statistical evidence that the probability of winning a set within a match varies from set to set. The data supports the conclusion that, at the highest level of men's singles tennis, the better player (not necessarily the winner) lifts his play in certain situations at least some of the time.
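
    Under the constant-probability hypothesis, the ten category probabilities are simple polynomials in p (for example, P(WWW) is proportional to p³); the fitted models compare such probabilities with the observed frequencies. A small enumeration of the category probabilities, with the value of p chosen arbitrarily for illustration:

```python
# Category probabilities for a best-of-five match when the eventual winner
# wins each set independently with constant probability p (normalized over
# the ten possible W/L sequences).
from itertools import product

def category_probs(p):
    probs = {}
    for seq in product("WL", repeat=5):
        match, wins = [], 0
        for s in seq:
            match.append(s)
            wins += (s == "W")
            if wins == 3:            # stop at the eventual winner's third set
                break
        if wins == 3:
            key = "".join(match)     # identical keys overwrite with equal values
            probs[key] = p ** key.count("W") * (1 - p) ** key.count("L")
    z = sum(probs.values())
    return {k: v / z for k, v in probs.items()}

print(category_probs(0.72))          # P(WWW), P(LWWW), ..., P(LLWWW)
```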

  7. Independent colimitation for carbon dioxide and inorganic phosphorus.

    Directory of Open Access Journals (Sweden)

    Elly Spijkerman

    Full Text Available Simultaneous limitation of plant growth by two or more nutrients is increasingly acknowledged as a common phenomenon in nature, but its cellular mechanisms are far from understood. We investigated the uptake kinetics of CO2 and phosphorus of the alga Chlamydomonas acidophila in response to growth at limiting conditions of CO2 and phosphorus. In addition, we fitted the data to four different Monod-type models: one assuming Liebig's Law of the minimum, one assuming that the affinity for the uptake of one nutrient is not influenced by the supply of the other (independent colimitation), and two where the uptake affinity for one nutrient depends on the supply of the other (dependent colimitation). In addition we asked whether the physiological response under colimitation differs from that under single nutrient limitation. We found no negative correlation between the affinities for uptake of the two nutrients, thereby rejecting a dependent colimitation. Kinetic data were supported by a better model fit assuming independent uptake of colimiting nutrients than when assuming Liebig's Law of the minimum or a dependent colimitation. Results show that cell nutrient homeostasis regulated nutrient acquisition, which resulted in a trade-off in the maximum uptake rates of CO2 and phosphorus, possibly driven by space limitation on the cell membrane for porters for the different nutrients. Hence, the response to colimitation deviated from that to a single nutrient limitation. In conclusion, responses to single nutrient limitation cannot be extrapolated to situations where multiple nutrients are limiting, which calls for colimitation experiments and models to properly predict growth responses to a changing natural environment. These deviations from single nutrient limitation response under colimiting conditions and independent colimitation may also hold for other nutrients in algae and in higher plants.

  8. Independent Colimitation for Carbon Dioxide and Inorganic Phosphorus

    Science.gov (United States)

    Spijkerman, Elly; de Castro, Francisco; Gaedke, Ursula

    2011-01-01

    Simultaneous limitation of plant growth by two or more nutrients is increasingly acknowledged as a common phenomenon in nature, but its cellular mechanisms are far from understood. We investigated the uptake kinetics of CO2 and phosphorus of the alga Chlamydomonas acidophila in response to growth at limiting conditions of CO2 and phosphorus. In addition, we fitted the data to four different Monod-type models: one assuming Liebig's Law of the minimum, one assuming that the affinity for the uptake of one nutrient is not influenced by the supply of the other (independent colimitation) and two where the uptake affinity for one nutrient depends on the supply of the other (dependent colimitation). In addition we asked whether the physiological response under colimitation differs from that under single nutrient limitation. We found no negative correlation between the affinities for uptake of the two nutrients, thereby rejecting a dependent colimitation. Kinetic data were supported by a better model fit assuming independent uptake of colimiting nutrients than when assuming Liebig's Law of the minimum or a dependent colimitation. Results show that cell nutrient homeostasis regulated nutrient acquisition which resulted in a trade-off in the maximum uptake rates of CO2 and phosphorus, possibly driven by space limitation on the cell membrane for porters for the different nutrients. Hence, the response to colimitation deviated from that to a single nutrient limitation. In conclusion, responses to single nutrient limitation cannot be extrapolated to situations where multiple nutrients are limiting, which calls for colimitation experiments and models to properly predict growth responses to a changing natural environment. These deviations from single nutrient limitation response under colimiting conditions and independent colimitation may also hold for other nutrients in algae and in higher plants. PMID:22145031

  10. P.L. 110-140, "Energy Independence and Security Act of 2007", 2007

    Energy Technology Data Exchange (ETDEWEB)

    None

    2007-12-19

    The Energy Independence and Security Act of 2007 (EISA), signed into law on December 19, 2007, set forth an agenda for improving U.S. energy security across the entire economy. While industrial energy efficiency is specifically called out in Title IV, Subtitle D, other EISA provisions also apply to AMO activities.

  11. Probabilistic maximum-value wind prediction for offshore environments

    DEFF Research Database (Denmark)

    Staid, Andrea; Pinson, Pierre; Guikema, Seth D.

    2015-01-01

    We develop statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by forecasts from the European Center for Medium-Range Weather Forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability.

  12. Uncovering the hidden risk architecture of the schizophrenias: confirmation in three independent genome-wide association studies.

    Science.gov (United States)

    Arnedo, Javier; Svrakic, Dragan M; Del Val, Coral; Romero-Zaliz, Rocío; Hernández-Cuervo, Helena; Fanous, Ayman H; Pato, Michele T; Pato, Carlos N; de Erausquin, Gabriel A; Cloninger, C Robert; Zwir, Igor

    2015-02-01

    The authors sought to demonstrate that schizophrenia is a heterogeneous group of heritable disorders caused by different genotypic networks that cause distinct clinical syndromes. In a large genome-wide association study of cases with schizophrenia and controls, the authors first identified sets of interacting single-nucleotide polymorphisms (SNPs) that cluster within particular individuals (SNP sets) regardless of clinical status. Second, they examined the risk of schizophrenia for each SNP set and tested replicability in two independent samples. Third, they identified genotypic networks composed of SNP sets sharing SNPs or subjects. Fourth, they identified sets of distinct clinical features that cluster in particular cases (phenotypic sets or clinical syndromes) without regard for their genetic background. Fifth, they tested whether SNP sets were associated with distinct phenotypic sets in a replicable manner across the three studies. The authors identified 42 SNP sets associated with a 70% or greater risk of schizophrenia, and confirmed 34 (81%) or more with similar high risk of schizophrenia in two independent samples. Seventeen networks of SNP sets did not share any SNP or subject. These disjoint genotypic networks were associated with distinct gene products and clinical syndromes (i.e., the schizophrenias) varying in symptoms and severity. Associations between genotypic networks and clinical syndromes were complex, showing multifinality and equifinality. The interactive networks explained the risk of schizophrenia more than the average effects of all SNPs (24%). Schizophrenia is a group of heritable disorders caused by a moderate number of separate genotypic networks associated with several distinct clinical syndromes.

  13. Maximum Diameter and Number of Tumors as a New Prognostic Indicator of Colorectal Liver Metastases.

    Science.gov (United States)

    Yoshimoto, Toshiaki; Morine, Yuji; Imura, Satoru; Ikemoto, Tetsuya; Iwahashi, Syuichi; Saito, Y U; Yamada, Sinichiro; Ishikawa, Daichi; Teraoku, Hiroki; Yoshikawa, Masato; Higashijima, Jun; Takasu, Chie; Shimada, Mitsuo

    2017-01-01

    Surgical resection is currently considered the only potentially curative option as a treatment strategy of colorectal liver metastases (CRLM). However, the criteria for selection of resectable CRLM are not clear. The aim of this study was to confirm a new prognostic indicator of CRLM after hepatic resection. One hundred thirty-nine patients who underwent initial surgical resection from 1994 to 2015 were investigated retrospectively. Prognostic factors of overall survival, including the product of maximum diameter and number of metastases (MDN), were analyzed. Primary tumor differentiation, vessel invasion, lymph node (LN) metastasis, non-optimally resectable metastases, H score, grade of liver metastases, resection with non-curative intent and MDN were found to be prognostic factors of overall survival (OS). In multivariate analyses of clinicopathological features associated with OS, MDN and non-curative intent were independent prognostic factors. Patients with MDN ≥30 showed significantly poorer prognosis than patients with MDN <30 in OS and relapse-free survival (RFS). MDN ≥30 is an independent prognostic factor of survival in patients with CRLM and an optimal surgical criterion for hepatectomy for CRLM.

  14. Attitude and Perception of Young Audience towards Patriotism in Independence Day TV Commercials

    Directory of Open Access Journals (Sweden)

    Fazlina Jaafar

    2016-01-01

    Full Text Available This quantitative study identifies the attitudes and perceptions of young audiences in Malaysia towards Independence Day TV commercials from Petronas and Maxis Berhad. First, respondents were exposed to three TV commercials with the same theme and purpose – to represent the spirit of patriotism. These respondents were then given a questionnaire to complete. Data were collected using purposive sampling and analyzed with descriptive statistics (SPSS), reported as percentages, means, and standard deviations. The findings revealed that respondents had negative attitudes towards Independence Day itself, but positive attitudes towards the patriotism shown in all TV commercials. They also showed positive perceptions of Independence Day television commercials: a high number of respondents agreed that the concept, theme and art direction of commercials about love and living in unity without racism will set the benchmark for the future direction of Independence Day commercials and are vital to instill patriotism towards the nation.

  15. Inferring phylogenetic networks by the maximum parsimony criterion: a case study.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-01

    Horizontal gene transfer (HGT) may result in genes whose evolutionary histories disagree with each other, as well as with the species tree. In this case, reconciling the species and gene trees results in a network of relationships, known as the "phylogenetic network" of the set of species. A phylogenetic network that incorporates HGT consists of an underlying species tree that captures vertical inheritance and a set of edges which model the "horizontal" transfer of genetic material. In a series of papers, Nakhleh and colleagues have recently formulated a maximum parsimony (MP) criterion for phylogenetic networks, provided an array of computationally efficient algorithms and heuristics for computing it, and demonstrated its plausibility on simulated data. In this article, we study the performance and robustness of this criterion on biological data. Our findings indicate that MP is very promising when its application is extended to the domain of phylogenetic network reconstruction and HGT detection. In all cases we investigated, the MP criterion detected the correct number of HGT events required to map the evolutionary history of a gene data set onto the species phylogeny. Furthermore, our results indicate that the criterion is robust with respect to both incomplete taxon sampling and the use of different site substitution matrices. Finally, our results show that the MP criterion is very promising in detecting HGT in chimeric genes, whose evolutionary histories are a mix of vertical and horizontal evolution. Besides the performance analysis of MP, our findings offer new insights into the evolution of 4 biological data sets and new possible explanations of HGT scenarios in their evolutionary history.

  16. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the natural approximation algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratios of several natural algorithms.
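
    As a concrete reference point for the algorithms named above, the following sketch implements First-Fit-Increasing under the usual first-fit rule (items sorted by nondecreasing size, each placed into the first bin with room); the function name and the unit bin capacity are our own choices, not taken from the paper.

    def first_fit_increasing(items, capacity=1.0):
        """Pack item sizes with First-Fit-Increasing: sort nondecreasing,
        put each item into the first bin with room, else open a new bin."""
        bins, loads = [], []
        for size in sorted(items):
            for i, load in enumerate(loads):
                if load + size <= capacity:
                    bins[i].append(size)
                    loads[i] += size
                    break
            else:
                bins.append([size])
                loads.append(size)
        return bins

    # Example: these sizes end up in two bins.
    print(first_fit_increasing([0.6, 0.3, 0.5, 0.2]))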

  17. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  18. An impossibility theorem for parameter independent hidden variable theories

    Science.gov (United States)

    Leegwater, Gijs

    2016-05-01

    Recently, Roger Colbeck and Renato Renner (C&R) have claimed that '[n]o extension of quantum theory can have improved predictive power' (Colbeck & Renner, 2011, 2012b). If correct, this is a spectacular impossibility theorem for hidden variable theories, which is more general than the theorems of Bell (1964) and Leggett (2003). Also, C&R have used their claim in an attempt to prove that a system's quantum-mechanical wave function is in a one-to-one correspondence with its 'ontic' state (Colbeck & Renner, 2012a). C&R's claim essentially means that in any hidden variable theory that is compatible with quantum-mechanical predictions, probabilities of measurement outcomes are independent of these hidden variables. This makes such variables otiose. On closer inspection, however, the generality and validity of the claim can be contested. First, it is based on an assumption called 'Freedom of Choice'. As the name suggests, this assumption involves the independence of an experimenter's choice of measurement settings. But in the way C&R define this assumption, a no-signalling condition is surreptitiously presupposed, making the assumption less innocent than it sounds. When using this definition, any hidden variable theory violating parameter independence, such as Bohmian Mechanics, is immediately shown to be incompatible with quantum-mechanical predictions. Also, the argument of C&R is hard to follow and their mathematical derivation contains several gaps, some of which cannot be closed in the way they suggest. We shall show that these gaps can be filled. The issue with the 'Freedom of Choice' assumption can be circumvented by explicitly assuming parameter independence. This makes the result less general, but better founded. We then obtain an impossibility theorem for hidden variable theories satisfying parameter independence only. As stated above, such hidden variable theories are impossible in the sense that any supplemental variables have no bearing on outcome probabilities.

  19. Published diagnostic models safely excluded colorectal cancer in an independent primary care validation study

    NARCIS (Netherlands)

    Elias, Sjoerd G; Kok, Liselotte; Witteman, Ben J M; Goedhard, Jelle G; Romberg-Camps, Mariëlle J L; Muris, Jean W M; de Wit, Niek J; Moons, Karel G M

    OBJECTIVE: To validate published diagnostic models for their ability to safely reduce unnecessary endoscopy referrals in primary care patients suspected of significant colorectal disease. STUDY DESIGN AND SETTING: Following a systematic literature search, we independently validated the identified models.

  20. On the k-independence required by linear probing and minwise independence

    DEFF Research Database (Denmark)

    Pǎtraşcu, Mihai; Thorup, Mikkel

    2016-01-01

    We show that linear probing requires 5-independent hash functions for expected constant-time performance, matching an upper bound of Pagh et al. [2009]. More precisely, we construct a random 4-independent hash function yielding expected logarithmic search time for certain keys. For (1 + ε)-approximate minwise independence, we show that Ω(lg(1/ε))-independent hash functions are required, matching an upper bound of Indyk [2001]. We also show that the very fast 2-independent multiply-shift scheme of Dietzfelbinger [1996] fails badly in both applications.
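
    The multiply-shift scheme referred to above is short enough to state in full; the sketch below (our own toy, hashing W-bit keys down to L-bit values with fresh random seeds) is a standard 2-independent variant.

    import random

    W, L = 32, 20                    # key bits, output bits (assumptions)
    MASK = (1 << (2 * W)) - 1
    a = random.getrandbits(2 * W)    # random seeds drawn once per function
    b = random.getrandbits(2 * W)

    def multiply_shift(x):
        """2-independent hash of a W-bit key x to an L-bit value."""
        return ((a * x + b) & MASK) >> (2 * W - L)

    print(multiply_shift(12345), multiply_shift(67890))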

  1. Intra-Gene DNA Methylation Variability Is a Clinically Independent Prognostic Marker in Women's Cancers.

    Science.gov (United States)

    Bartlett, Thomas E; Jones, Allison; Goode, Ellen L; Fridley, Brooke L; Cunningham, Julie M; Berns, Els M J J; Wik, Elisabeth; Salvesen, Helga B; Davidson, Ben; Trope, Claes G; Lambrechts, Sandrina; Vergote, Ignace; Widschwendter, Martin

    2015-01-01

    We introduce a novel per-gene measure of intra-gene DNA methylation variability (IGV) based on the Illumina Infinium HumanMethylation450 platform, which is prognostic independently of well-known predictors of clinical outcome. Using IGV, we derive a robust gene-panel prognostic signature for ovarian cancer (OC, n = 221), which validates in two independent data sets from Mayo Clinic (n = 198) and TCGA (n = 358), with significance of p = 0.004 in both sets. The OC prognostic signature gene-panel is comprised of four gene groups, which represent distinct biological processes. We show the IGV measurements of these gene groups are most likely a reflection of a mixture of intra-tumour heterogeneity and transcription factor (TF) binding/activity. IGV can be used to predict clinical outcome in patients individually, providing a surrogate read-out of hard-to-measure disease processes.

  2. Autocatalytic sets in a partitioned biochemical network.

    Science.gov (United States)

    Smith, Joshua I; Steel, Mike; Hordijk, Wim

    2014-01-01

    In previous work, RAF theory has been developed as a tool for making theoretical progress on the origin of life question, providing insight into the structure and occurrence of self-sustaining and collectively autocatalytic sets within catalytic polymer networks. We present here an extension in which there are two "independent" polymer sets, where catalysis occurs within and between the sets, but there are no reactions combining polymers from both sets. Such an extension reflects the interaction between nucleic acids and peptides observed in modern cells and proposed forms of early life. We present theoretical work and simulations which suggest that the occurrence of autocatalytic sets is robust to the partitioned structure of the network. We also show that autocatalytic sets remain likely even when the molecules in the system are not polymers, and a low level of inhibition is present. Finally, we present a kinetic extension which assigns a rate to each reaction in the system, and show that identifying autocatalytic sets within such a system is an NP-complete problem. Recent experimental work has challenged the necessity of an RNA world by suggesting that peptide-nucleic acid interactions occurred early in chemical evolution. The present work indicates that such a peptide-RNA world could support the spontaneous development of autocatalytic sets and is thus a feasible alternative worthy of investigation.
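
    To make the autocatalytic-set notion concrete, the sketch below implements the standard maxRAF pruning loop on a toy encoding of reactions as (reactants, products, catalysts) triples; the encoding and names are ours, and the paper's partitioned and kinetic extensions are not reproduced.

    def closure(food, reactions):
        """Molecules reachable from the food set via the given reactions."""
        avail, changed = set(food), True
        while changed:
            changed = False
            for reactants, products, _ in reactions:
                if set(reactants) <= avail and not set(products) <= avail:
                    avail |= set(products)
                    changed = True
        return avail

    def max_raf(food, reactions):
        """Prune reactions that are uncatalysed or unsupported until a
        fixed point: the maximal RAF (possibly empty) remains."""
        current = list(reactions)
        while True:
            avail = closure(food, current)
            kept = [r for r in current
                    if set(r[0]) <= avail and any(c in avail for c in r[2])]
            if len(kept) == len(current):
                return kept
            current = kept

    food = {"a", "b"}
    rxns = [(("a", "b"), ("ab",), ("ab",)),    # a+b -> ab, catalysed by ab
            (("ab", "a"), ("aab",), ("zz",))]  # catalyst zz is unreachable
    print(max_raf(food, rxns))                 # only the first reaction stays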

  3. The generalized scheme-independent Crewther relation in QCD

    Science.gov (United States)

    Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; Brodsky, Stanley J.

    2017-07-01

    The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbatively calculable QCD process. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton-nucleon scattering times the Adler function, defined from the cross section for electron-positron annihilation into hadrons, has no pQCD radiative corrections. The "Generalized Crewther Relation" relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function ($D^{ns}$) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering ($C^{Bjp}$) at leading twist. A scheme-dependent $\Delta_{CSB}$ term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence on both the choice of the renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both $D^{ns}$ and the inverse coefficient $(C^{Bjp})^{-1}$ have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, $\hat{\alpha}_d(Q) = \sum_{i \ge 1} \hat{\alpha}^i_{g_1}(Q_i)$, at their respective physical scales. This identity is independent of the choice of the renormalization scheme at any finite order, and the dependence on the choice of the initial scale is negligible.

  4. Using maximum topology matching to explore differences in species distribution models

    Science.gov (United States)

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

    Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models help ecologists understand areas where their data or understanding of the system is incomplete and will help guide further investigation in these regions. These differences can also indicate an important source of model to model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allows for manual exploration of the models usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching that computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topological matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.

  5. Optimal Testing Effort Control for Modular Software System Incorporating The Concept of Independent and Dependent Faults: A Control Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Kuldeep CHAUDHARY

    2012-07-01

    Full Text Available In this paper, we discuss a modular software system for software reliability growth models using testing effort and study the optimal testing effort intensity for each module. The main goal is to minimize the cost of software development when a budget constraint on testing expenditure is given. We discuss the evolution of fault removal dynamics, incorporating the idea of leading/independent and dependent faults in a modular software system under the assumption that testing of each of the modules is done independently. The problem is formulated as an optimal control problem, and the solution to the proposed problem has been obtained by using Pontryagin's Maximum Principle.

  6. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
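
    The signal-to-noise eigenvector compression described above reduces, in its simplest form, to a generalized symmetric eigenproblem; the sketch below (our own toy, not the WMAP pipeline) keeps the highest signal-to-noise modes and projects the data onto them.

    import numpy as np
    from scipy.linalg import eigh

    def sn_compress(data, S, N, keep):
        """Project data onto the top `keep` eigenmodes of S v = lam N v,
        i.e. the modes carrying the most signal per unit of noise."""
        lam, V = eigh(S, N)                          # ascending eigenvalues
        modes = V[:, np.argsort(lam)[::-1][:keep]]
        return modes.T @ data

    # Toy usage with random symmetric positive definite S and N.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50)); S = A @ A.T + 50 * np.eye(50)
    B = rng.standard_normal((50, 50)); N = B @ B.T + 50 * np.eye(50)
    print(sn_compress(rng.standard_normal(50), S, N, keep=10).shape)  # (10,)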

  7. Depression is an independent determinant of life satisfaction early after stroke.

    Science.gov (United States)

    Oosterveer, Daniëlla M; Mishre, Radha Rambaran; van Oort, Andrea; Bodde, Karin; Aerden, Leo A M

    2017-03-06

    Life satisfaction is reduced in stroke patients. However, as a rule, rehabilitation goals are not aimed at life satisfaction, but at activities and participation. In order to optimize life satisfaction in stroke patients, rehabilitation should take into account the determinants of life satisfaction. The aim of this study was therefore to determine what factors are independent determinants of life satisfaction in a large group of patients early after stroke. Stroke-surviving patients were examined by a specialized nurse 6 weeks after discharge from hospital or rehabilitation setting. A standardized history and several screening lists, including the Lisat-9, were completed. Step-wise regression was used to identify independent determinants of life satisfaction. A total of 284 stroke-surviving patients were included in the study. Of these, 117 answered all of the Lisat-9 questions. Most patients (66.5%) rated their life as a whole as "satisfying" or "very satisfying". More depressive symptoms were independently associated with lower satisfaction with life early after a stroke. The score on the Hospital Anxiety and Depression Scale depression items is independently associated with life satisfaction. Physicians should therefore pay close attention to the mood of these patients.

  8. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    Science.gov (United States)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  9. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only over matrices of rank one. Examples are given.

  10. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    International Nuclear Information System (INIS)

    Jiya, J. D.; Tahirou, G.

    2002-01-01

    This paper presents a microprocessor-controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle.
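
    The abstract does not spell out the tracking algorithm, so the sketch below is a hedged perturb-and-observe illustration of the kind of hill-climbing loop such a tracker runs; read_v, read_i and set_duty stand in for hypothetical hardware hooks.

    def perturb_and_observe(read_v, read_i, set_duty,
                            duty=0.5, step=0.01, n_iter=1000):
        """Hill-climbing MPPT: perturb the DC-DC duty cycle and keep
        moving in whichever direction increases measured PV power."""
        set_duty(duty)
        last_power = read_v() * read_i()
        direction = 1
        for _ in range(n_iter):
            duty = min(max(duty + direction * step, 0.0), 1.0)
            set_duty(duty)
            power = read_v() * read_i()
            if power < last_power:   # power dropped: reverse direction
                direction = -direction
            last_power = power
        return duty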

  11. Optimal operating conditions for maximum biogas production in anaerobic bioreactors

    International Nuclear Information System (INIS)

    Balmant, W.; Oliveira, B.H.; Mitchell, D.A.; Vargas, J.V.C.; Ordonez, J.C.

    2014-01-01

    The objective of this paper is to demonstrate the existence of an optimal residence time and substrate inlet mass flow rate for maximum methane production through numerical simulations performed with a general transient mathematical model of an anaerobic biodigester introduced in this study. A simplified model is suggested herein with only the most important reaction steps, which are carried out by a single type of microorganisms following Monod kinetics. The mathematical model was developed for a well mixed reactor (CSTR – Continuous Stirred-Tank Reactor), considering three main reaction steps: acidogenesis, with a μ_max of 8.64 day⁻¹ and a K_S of 250 mg/L; acetogenesis, with a μ_max of 2.64 day⁻¹ and a K_S of 32 mg/L; and methanogenesis, with a μ_max of 1.392 day⁻¹ and a K_S of 100 mg/L. The yield coefficients were 0.1 g dry cells/g polymeric compound for acidogenesis, 0.1 g dry cells/g propionic acid and 0.1 g dry cells/g butyric acid for acetogenesis, and 0.1 g dry cells/g acetic acid for methanogenesis. The model describes both the transient and the steady-state regime for several different biodigester designs and operating conditions. After model experimental validation, a parametric analysis was performed. It was found that biogas production is strongly dependent on the input polymeric substrate and fermentable monomer concentrations, but fairly independent of the input propionic, acetic and butyric acid concentrations. An optimisation study was then conducted, and optimal residence time and substrate inlet mass flow rate were found for maximum methane production. The optima found were very sharp, showing a sudden drop of methane mass flow rate from the observed maximum to zero within a 20% range around the optimal operating parameters, which stresses the importance of their identification, no matter how complex the actual bioreactor design may be. The model is therefore expected to be a useful tool for simulation, design, control and optimisation.
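
    The Monod constants quoted above plug directly into the specific growth rate expression μ = μ_max · S / (K_S + S); a short sketch with the acidogenesis values shows the half-saturation behaviour.

    def monod_rate(mu_max, K_s, S):
        """Specific growth rate (1/day) under Monod kinetics."""
        return mu_max * S / (K_s + S)

    # Acidogenesis constants from the abstract: at a substrate level equal
    # to K_S (250 mg/L) the rate is half of mu_max.
    print(monod_rate(8.64, 250.0, 250.0))   # -> 4.32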

  12. From “Smaller is Stronger” to “Size-Independent Strength Plateau”: Towards Measuring the Ideal Strength of Iron

    KAUST Repository

    Han, Wei-Zhong; Huang, Ling; Ogata, Shigenobu; Kimizuka, Hajime; Yang, Zhao-Chun; Weinberger, Christopher; Li, Qing-Jie; Liu, Bo-Yu; Zhang, Xixiang; Li, Ju; Ma, Evan; Shan, Zhi-Wei

    2015-01-01

    The trend from “smaller is stronger” to “size-independent strength plateau” is observed in the compression of spherical iron nanoparticles. When the diameter of iron nanospheres is less than a critical value, the maximum contact pressure saturates at 10.7 GPa, corresponding to a local shear stress of ≈9.4 GPa, which is comparable to the theoretical shear strength of iron.

  14. Maximum entropy reconstructions for crystallographic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Papoular, R

    1997-07-01

    The Fourier Transform is of central importance to Crystallography since it allows the visualization in real space of three-dimensional scattering densities pertaining to physical systems from diffraction data (powder or single-crystal diffraction, using x-rays, neutrons or electrons). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by Least-Squares techniques (e.g., the Rietveld method in the case of Powder Diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique, related to solving ill-conditioned inverse problems. It is ideally suited for tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in Astronomy, Radioastronomy and Medical Imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates Prior Knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; MaxEnt substantially reduces truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to Crystallography are first presented. The method is then illustrated by a detailed example specific to Neutron Diffraction: the search for protons in solids. (author). 17 refs.

  15. Ten common mistakes to avoid as an independent consultant.

    Science.gov (United States)

    Hau, M L

    1997-01-01

    1. Enthusiasm, dedication, and hard work will not guarantee success as an independent consultant. 2. Careful market research, selecting a venture that promises clients and profit, should be the basis for deciding to start a consultant business, not just enthusiasm and emotional drive. 3. Business management practices of developing a business plan, careful price setting, and managing cash flow are essential for business survival. 4. Legal considerations, including the form of the business, contracts, and essential recordkeeping, must not be overlooked.

  16. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  17. Comprehensive performance analyses and optimization of the irreversible thermodynamic cycle engines (TCE) under maximum power (MP) and maximum power density (MPD) conditions

    International Nuclear Information System (INIS)

    Gonca, Guven; Sahin, Bahri; Ust, Yasin; Parlak, Adnan

    2015-01-01

    This paper presents comprehensive performance analyses and comparisons for air-standard irreversible thermodynamic cycle engines (TCE) based on the power output, power density, thermal efficiency, maximum dimensionless power output (MP), maximum dimensionless power density (MPD) and maximum thermal efficiency (MEF) criteria. Internal irreversibility of the cycles occurred during the irreversible-adiabatic processes is considered by using isentropic efficiencies of compression and expansion processes. The performances of the cycles are obtained by using engine design parameters such as isentropic temperature ratio of the compression process, pressure ratio, stroke ratio, cut-off ratio, Miller cycle ratio, exhaust temperature ratio, cycle temperature ratio and cycle pressure ratio. The effects of engine design parameters on the maximum and optimal performances are investigated. - Highlights: • Performance analyses are conducted for irreversible thermodynamic cycle engines. • Comprehensive computations are performed. • Maximum and optimum performances of the engines are shown. • The effects of design parameters on performance and power density are examined. • The results obtained may be guidelines to the engine designers

  18. Maximum mouth opening and trismus in 143 patients treated for oral cancer: a 1-year prospective study.

    Science.gov (United States)

    Wetzels, Jan-Willem G H; Merkx, Matthias A W; de Haan, Anton F J; Koole, Ron; Speksnijder, Caroline M

    2014-12-01

    Patients with oral cancer can develop restricted mouth opening (trismus) because of the oncologic treatment. Maximum mouth opening (MMO) was measured in 143 patients shortly before treatment and 0, 6, and 12 months posttreatment, and the results were analyzed using a linear mixed-effects model. In every patient, MMO decreased after treatment. The patients who underwent surgery recovered partially by 6 and 12 months after treatment, whereas the patients who received both surgery and radiotherapy or primary radiotherapy did not recover. Tumor location, tumor size, and alcohol consumption had independent effects on MMO. Having trismus (MMO ≤35 mm) was a frequent outcome of oral cancer treatment.

  19. Maximum Likelihood PSD Estimation for Speech Enhancement in Reverberation and Noise

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Jensen, Søren Holdt

    2016-01-01

    In this contribution we focus on the problem of power spectral density (PSD) estimation from multiple microphone signals in reverberant and noisy environments. The PSD estimation method proposed in this paper is based on the maximum likelihood (ML) methodology. In particular, we derive a novel ML PSD estimator, and it is shown numerically that the mean squared estimation error achieved by the proposed method is near the limit set by the corresponding Cramér-Rao lower bound. The speech dereverberation performance of a multi-channel Wiener filter (MWF) based on the proposed PSD estimator is measured using several instrumental measures and is shown to be higher than when the competing estimator is used. Moreover, we perform a speech intelligibility test where we demonstrate that both the proposed and the competing PSD estimators lead to similar intelligibility improvements.

  20. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wave-length shifting fibers. The design and construction of the shower maximum detector is described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs

  1. The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation

    Science.gov (United States)

    Felder, Guido; Zischg, Andreas; Weingartner, Rolf

    2017-07-01

    Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with and therefore dependent on observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling when it comes to PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs, the resulting peak discharges as well as the reliability and the plausibility of the estimates are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.

  2. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
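
    The paper's general formula is not reproduced here, but the quantity it computes can be illustrated with a Monte Carlo sketch for the 2AFC case: the ideal observer picks the interval with the larger likelihood ratio, and the proportion of correct picks estimates the maximum Pc. The function names and the Gaussian example are our own.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def max_pc_2afc(pdf_s, pdf_n, sample_s, sample_n, n=100_000):
        """Monte Carlo estimate of the optimal 2AFC proportion correct;
        ties in the likelihood ratio count as half correct."""
        xs, xn = sample_s(n), sample_n(n)
        lr_s = pdf_s(xs) / pdf_n(xs)   # likelihood ratio, signal interval
        lr_n = pdf_s(xn) / pdf_n(xn)   # likelihood ratio, noise interval
        return np.mean((lr_s > lr_n) + 0.5 * (lr_s == lr_n))

    # Equal-variance Gaussian example (d' = 1): the exact maximum Pc is
    # Phi(1/sqrt(2)) ~ 0.760, which the estimate should approach.
    pc = max_pc_2afc(norm(1).pdf, norm(0).pdf,
                     lambda n: rng.normal(1.0, 1.0, n),
                     lambda n: rng.normal(0.0, 1.0, n))
    print(round(pc, 3))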

  3. Wide frequency independently controlled dual-band inkjet-printed antenna

    KAUST Repository

    AbuTarboush, Hattan F.

    2014-01-08

    A low-cost inkjet-printed multiband monopole antenna is presented. The unique advantage of the proposed antenna is the freedom to adjust and set the dual-band of the antenna independently over a wide range (148.83%). To demonstrate the independent control feature, the 2.4 and 3.4 GHz bands for the wireless local area network (WLAN) and worldwide interoperability for microwave access (WiMAX) applications are selected as an example. The measured impedance bandwidths for the 2.4 and 3.4 GHz bands are 15.2 and 23.7%, respectively. These dual-bands have the ability to be controlled independently between 1.1 and 7.5 GHz without affecting the other band. In addition, the proposed antenna can be assigned for different mobile and wireless applications such as GPS, PCS, GSM 1800, 1900, UMTS, and up to 5-GHz WLAN and WiMAX applications. The mechanism of independent control of each radiator through dimensional variation is discussed in detail. The antenna has a compact size of 10 × 37.3 × 0.44 mm³, leaving enough space for the driving electronics on the paper substrate. The measured results from the prototype are in good agreement with the simulated results. Owing to inkjet printing on ordinary paper, the design is extremely lightweight and highly suitable for low cost and large volume manufacturing.

  4. Algebraic Thinking in Solving Linier Program at High School Level: Female Student’s Field Independent Cognitive Style

    Science.gov (United States)

    Hardiani, N.; Budayasa, I. K.; Juniati, D.

    2018-01-01

    The aim of this study was to describe the algebraic thinking of high school female students with a field independent cognitive style in solving a linear programming problem by revealing the students' responses in depth. The subjects in this study were 7 female students with a field independent cognitive style in class 11. This research was descriptive and qualitative. Data were collected through observation, documentation, and interviews, and analyzed by reduction, presentation, and conclusion. The results of this study showed that the female students with a field independent cognitive style, in solving the linear programming problem, had the ability to represent algebraic ideas from the narrative question they had read by manipulating symbols and variables presented in tabular form, to create and build mathematical models as a two-variable linear inequality system representing those algebraic ideas, and to interpret the solutions as variables obtained from the points of intersection in the solution area in order to obtain the maximum benefit.

  5. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.

  6. Efficient generation of connectivity in neuronal networks from simulator-independent descriptions

    Directory of Open Access Journals (Sweden)

    Mikael eDjurfeldt

    2014-04-01

    Full Text Available Simulator-independent descriptions of connectivity in neuronal networks promise greater ease of model sharing, improved reproducibility of simulation results, and reduced programming effort for computational neuroscientists. However, until now, enabling the use of such descriptions in a given simulator in a computationally efficient way has entailed considerable work for simulator developers, which must be repeated for each new connectivity-generating library that is developed. We have developed a generic connection generator interface that provides a standard way to connect a connectivity-generating library to a simulator, such that one library can easily be replaced by another, according to the modeller's needs. We have used the connection generator interface to connect C++ and Python implementations of the connection-set algebra to the NEST simulator. We also demonstrate how the simulator-independent modelling framework PyNN can transparently take advantage of this, passing a connection description through to the simulator layer for rapid processing in C++ where a simulator supports the connection generator interface, and falling back to slower iteration in Python otherwise. A set of benchmarks demonstrates the good performance of the interface.
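
    To illustrate the interface idea, here is a hypothetical sketch (names ours, not the actual NEST/CSA/PyNN API): the simulator side only iterates over connection triples, while the generating library hides how they are produced.

    import random
    from itertools import product

    class ConnectionGenerator:
        """Minimal connection-generator interface: consumers iterate over
        (source, target, weight) triples; the connectivity rule is hidden."""
        def __init__(self, sources, targets, rule):
            self.sources, self.targets, self.rule = sources, targets, rule

        def __iter__(self):
            for s, t in product(self.sources, self.targets):
                if self.rule(s, t):
                    yield s, t, 1.0    # fixed weight keeps the sketch short

    # A simulator-side consumer sees only the iteration protocol.
    gen = ConnectionGenerator(range(4), range(4),
                              lambda s, t: s != t and random.random() < 0.5)
    for s, t, w in gen:
        print(f"connect {s} -> {t} (w={w})")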

  7. Model-independent particle accelerator tuning

    Directory of Open Access Journals (Sweden)

    Alexander Scheinker

    2013-10-01

    Full Text Available We present a new model-independent dynamic feedback technique, rotation rate tuning, for automatically and simultaneously tuning coupled components of uncertain, complex systems. The main advantages of the method are: (1) it has the ability to handle unknown, time-varying systems; (2) it gives known bounds on parameter update rates; (3) we give an analytic proof of its convergence and its stability; and (4) it has a simple digital implementation through a control system such as the experimental physics and industrial control system (EPICS). Because this technique is model independent, it may be useful as a real-time, in-hardware, feedback-based optimization scheme for uncertain and time-varying systems. In particular, it is robust enough to handle uncertainty due to coupling, thermal cycling, misalignments, and manufacturing imperfections. As a result, it may be used as a fine-tuning supplement for existing accelerator tuning/control schemes. We present multiparticle simulation results demonstrating the scheme's ability to simultaneously adaptively adjust the set points of 22 quadrupole magnets and two rf buncher cavities in the Los Alamos Neutron Science Center (LANSCE) Linear Accelerator's transport region, while the beam properties and rf phase shift are continuously varying. The tuning is based only on beam current readings, without knowledge of particle dynamics. We also present an outline of how to implement this general scheme in software for optimization, and in hardware for feedback-based control/tuning, for a wide range of systems.
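
    The rotation rate idea can be illustrated with a generic extremum-seeking sketch (a toy of ours, not the paper's implementation or the LANSCE setup): each parameter dithers at its own frequency, the measured cost shifts the dither phase, and on average the parameters drift downhill without any model of the system. All constants and the quadratic toy cost are assumptions.

    import numpy as np

    def es_tune(cost, p0, n_steps=20000, dt=0.002, k=2.0, alpha=1.0):
        """Model-independent extremum-seeking tuner (minimizing sketch)."""
        p = np.array(p0, dtype=float)
        omega = 50.0 * (1.0 + np.arange(p.size) / p.size)  # distinct freqs
        for n in range(n_steps):
            c = cost(p)            # only a scalar cost reading is needed
            p += dt * np.sqrt(alpha * omega) * np.cos(omega * n * dt + k * c)
        return p

    # Toy usage: two 'set points' drift toward the minimum at (0.3, -0.7),
    # accurate up to the residual dither amplitude.
    print(es_tune(lambda p: np.sum((p - np.array([0.3, -0.7]))**2), [0.0, 0.0]))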

  8. 32 CFR 842.35 - Depreciation and maximum allowances.

    Science.gov (United States)

    2010-07-01

    § 842.35 Depreciation and maximum allowances (32 CFR, National Defense; Litigation; Administrative Claims; Personnel Claims, 31 U.S.C. 3701, 3721). The military services have jointly established the “Allowance List-Depreciation Guide” to...

  9. Behavioral Analytic Approach to Placement of Patients in Community Settings.

    Science.gov (United States)

    Glickman, Henry S.; And Others

    Twenty adult psychiatric outpatients were assessed by their primary therapists on the Current Behavior Inventory prior to placing them in community settings. The diagnoses included schizophrenia, major affective disorder, dysthymic disorder, and atypical paranoid disorder. The inventory assessed behaviors in four areas: independent community…

  10. Adaptive tools in virtual environments: Independent component analysis for multimedia

    DEFF Research Database (Denmark)

    Kolenda, Thomas

    2002-01-01

    The thesis investigates the role of independent component analysis in the setting of virtual environments, with the purpose of finding properties that reflect human context. A general framework for performing unsupervised classification with ICA is presented as an extension to the latent semantic indexing framework. Different ICA algorithms were compared to investigate computational differences and separation results. The ICA properties were finally implemented in a chat room analysis tool and briefly investigated for visualization of search engine results.

  11. Combined analysis of steady state and transient transport by the maximum entropy method

    Energy Technology Data Exchange (ETDEWEB)

    Giannone, L.; Stroth, U; Koellermeyer, J [Association Euratom-Max-Planck-Institut fuer Plasmaphysik, Garching (Germany); and others

    1996-04-01

    A new maximum entropy approach has been applied to analyse three types of transient transport experiments. For sawtooth propagation experiments in the ASDEX Upgrade and ECRH power modulation and power-switching experiments in the Wendelstein 7-AS Stellarator, either the time evolution of the temperature perturbation or the phase and amplitude of the modulated temperature perturbation are used as non-linear constraints on the χ_e profile to be fitted. Simultaneously, the constraints given by the equilibrium temperature profile for steady-state power balance are fitted. In the maximum entropy formulation, the flattest χ_e profile consistent with the constraints is found. It was found that χ_e determined from sawtooth propagation was greater than the power balance value by a factor of five in the ASDEX Upgrade. From power modulation experiments, employing the measurements of four modulation frequencies simultaneously, the power deposition profile as well as the χ_e profile could be determined. A comparison of the predictions of a time-independent χ_e model and a power-dependent χ_e model is made. The power-switching experiments show that the χ_e profile must change within a millisecond to a new value consistent with the power balance value at the new input power. Neither power deposition broadening due to suprathermal electrons nor temperature or gradient dependences of χ_e can explain this observation. (author).

  12. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    Science.gov (United States)

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d⁴) for the basis change plus O(d³) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d³) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
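
    The "workhorse" step, finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one, is the classic projection onto the probability simplex. Below is a sketch; this sort-based version runs in O(d log d) rather than the paper's linear time, but computes the same projection. Applied to the eigenvalues of the candidate matrix μ (keeping its eigenvectors), it yields the nearest physical state under the 2-norm.

    ```python
    import numpy as np

    def closest_probability_distribution(mu):
        """Project real numbers summing to one onto the probability simplex,
        minimizing Euclidean distance. Standard sort-based routine; the
        paper describes an equivalent linear-time variant."""
        mu = np.asarray(mu, dtype=float)
        d = mu.size
        u = np.sort(mu)[::-1]        # values in descending order
        css = np.cumsum(u)
        # largest k with u_k + (1 - sum_{i<=k} u_i)/k > 0
        k = np.nonzero(u + (1.0 - css) / np.arange(1, d + 1) > 0)[0][-1] + 1
        shift = (1.0 - css[k - 1]) / k
        return np.clip(mu + shift, 0.0, None)

    # Example: eigenvalues of a noisy "density matrix" with negative entries.
    eigs = np.array([0.8, 0.35, -0.1, -0.05])      # sums to 1 but not physical
    print(closest_probability_distribution(eigs))  # [0.725, 0.275, 0, 0]
    ```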

  13. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio

    2015-11-10

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts in which the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that depends on the smoothness of the covariance function. Owing to this fast decay, only a small set of coefficients is computed, with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
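
    The elementary device behind restricted maximum likelihood, contrasts that filter out the deterministic component, can be sketched directly: any matrix W whose columns span the null space of the design matrix X gives contrasts Wᵀy whose distribution is free of the fixed-effect parameters, so covariance parameters can be estimated from Wᵀy alone. The paper's multi-level construction is far more elaborate; the following shows only this basic idea on hypothetical toy data.

    ```python
    import numpy as np
    from scipy.linalg import null_space

    rng = np.random.default_rng(1)
    n = 200
    X = np.column_stack([np.ones(n), rng.random(n)])  # design (deterministic part)
    beta = np.array([3.0, -2.0])
    y = X @ beta + rng.standard_normal(n)             # toy data: mean + noise

    W = null_space(X.T)          # columns span {w : X^T w = 0}, shape (n, n-2)
    z = W.T @ y                  # contrasts: E[z] = W^T X beta = 0
    print(np.allclose(W.T @ X, 0.0))   # True: fixed effects filtered out
    print(z.var(ddof=0))               # estimates the noise variance, ~1
    ```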

  14. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    Localizing neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. With such a weight, the next iteration has a better chance of rectifying local source-location bias in the previous solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic three-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
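
    A toy sketch of the neighbor-pooled reweighting idea (not the authors' code, and omitting the charge source model): a FOCUSS-style loop in which each point's weight for the next iteration is derived from the previous solution at the point and at its neighbors.

    ```python
    import numpy as np

    def neighbor_weighted_focuss(L, b, neighbors, n_iter=10, eps=1e-6):
        """FOCUSS-style iterative reweighting where each point's weight pools
        the previous solution over the point and its neighbors (the idea
        behind CMOSS; regularization details omitted). L: lead field (m x n),
        b: measurements, neighbors: list of index lists."""
        n = L.shape[1]
        x = np.ones(n)
        for _ in range(n_iter):
            # weight from the previous solution at each point and its neighbors
            w = np.array([np.abs(x[[j, *neighbors[j]]]).mean()
                          for j in range(n)]) + eps
            Lw = L * w                          # column scaling: L @ diag(w)
            x = w * (np.linalg.pinv(Lw) @ b)    # weighted minimum-norm update
        return x

    # Toy example: a 1-D chain of 30 sources, 10 sensors, 2 active sources.
    rng = np.random.default_rng(2)
    L = rng.standard_normal((10, 30))
    x_true = np.zeros(30); x_true[[7, 21]] = [1.0, -0.8]
    b = L @ x_true
    nbrs = [[j - 1] * (j > 0) + [j + 1] * (j < 29) for j in range(30)]
    print(np.round(neighbor_weighted_focuss(L, b, nbrs), 2))
    ```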

  15. Quantitative Maximum Shear-Wave Stiffness of Breast Masses as a Predictor of Histopathologic Severity.

    Science.gov (United States)

    Berg, Wendie A; Mendelson, Ellen B; Cosgrove, David O; Doré, Caroline J; Gay, Joel; Henry, Jean-Pierre; Cohen-Bacrie, Claude

    2015-08-01

    The objective of our study was to compare quantitative maximum breast mass stiffness on shear-wave elastography (SWE) with histopathologic outcome. From September 2008 through September 2010, at 16 centers in the United States and Europe, 1647 women with a sonographically visible breast mass consented to undergo quantitative SWE in this prospective protocol; 1562 masses in 1562 women had an acceptable reference standard. The quantitative maximum stiffness (termed "Emax") on three acquisitions was recorded for each mass, with the range set from 0 (very soft) to 180 kPa (very stiff). The median Emax and interquartile ranges (IQRs) were determined as a function of histopathologic diagnosis and were compared using the Mann-Whitney U test. We considered the impact of mass size on maximum stiffness by performing the same comparisons for masses 9 mm or smaller and those larger than 9 mm in diameter. The median patient age was 50 years (mean, 51.8 years; SD, 14.5 years; range, 21-94 years), and the median lesion diameter was 12 mm (mean, 14 mm; SD, 7.9 mm; range, 1-53 mm). The median Emax of the 1562 masses (32.1% malignant) was 71 kPa (mean, 90 kPa; SD, 65 kPa; IQR, 31-170 kPa). Of 502 malignancies, 23 (4.6%) ductal carcinoma in situ (DCIS) masses had a median Emax of 126 kPa (IQR, 71-180 kPa) and were less stiff than 468 invasive carcinomas (median Emax, 180 kPa [IQR, 138-180 kPa]; p = 0.002). Benign lesions were much softer than malignancies (median Emax, 43 kPa [IQR, 24-83 kPa] vs 180 kPa [IQR, 129-180 kPa]), a difference that held for both smaller and larger masses. Despite overlap in Emax values, maximum stiffness measured by SWE is a highly effective predictor of the histopathologic severity of sonographically depicted breast masses.

  16. PRKCA and multiple sclerosis: association in two independent populations.

    Directory of Open Access Journals (Sweden)

    Janna Saarela

    2006-03-01

    Full Text Available Multiple sclerosis (MS) is a chronic disease of the central nervous system responsible for a large portion of neurological disabilities in young adults. Similar to what occurs in numerous complex diseases, both unknown environmental factors and genetic predisposition are required to generate MS. We ascertained a set of 63 Finnish MS families, originating from a high-risk region of the country, to identify a susceptibility gene within the previously established 3.4-Mb region on 17q24. Initial single nucleotide polymorphism (SNP)-based association implicated the PRKCA (protein kinase C alpha) gene, and this association was replicated in an independent set of 148 Finnish MS families (p = 0.0004; remaining significant after correction for multiple testing). Further, a dense set of 211 SNPs evenly covering the PRKCA gene and the flanking regions was selected from the dbSNP database and analyzed in two large, independent MS cohorts: in 211 Finnish and 554 Canadian MS families. A multipoint SNP analysis indicated linkage to PRKCA and its telomeric flanking region in both populations, and SNP haplotype and genotype combination analyses revealed an allelic variant of PRKCA, which covers the region between introns 3 and 8, to be over-represented in Finnish MS cases (odds ratio = 1.34, 95% confidence interval 1.07-1.68). A second allelic variant, covering the same region of the PRKCA gene, showed somewhat stronger evidence for association in the Canadian families (odds ratio = 1.64, 95% confidence interval 1.39-1.94). Initial functional relevance for disease predisposition was suggested by the expression analysis: The transcript levels of PRKCA showed correlation with the copy number of the Finnish and Canadian "risk" haplotypes in CD4-negative mononuclear cells of five Finnish multiplex families and in lymphoblast cell lines of 11 Centre d'Etude du Polymorphisme Humain (CEPH) individuals of European origin.

  17. 40 CFR 141.63 - Maximum contaminant levels (MCLs) for microbiological contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels (MCLs) for microbiological contaminants. 141.63 Section 141.63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Water Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.63 Maximum...

  18. Intra-Gene DNA Methylation Variability Is a Clinically Independent Prognostic Marker in Women's Cancers.

    Directory of Open Access Journals (Sweden)

    Thomas E Bartlett

    Full Text Available We introduce a novel per-gene measure of intra-gene DNA methylation variability (IGV) based on the Illumina Infinium HumanMethylation450 platform, which is prognostic independently of well-known predictors of clinical outcome. Using IGV, we derive a robust gene-panel prognostic signature for ovarian cancer (OC, n = 221), which validates in two independent data sets from Mayo Clinic (n = 198) and TCGA (n = 358), with significance of p = 0.004 in both sets. The OC prognostic signature gene-panel comprises four gene groups, which represent distinct biological processes. We show the IGV measurements of these gene groups are most likely a reflection of a mixture of intra-tumour heterogeneity and transcription factor (TF) binding/activity. IGV can be used to predict clinical outcome in patients individually, providing a surrogate read-out of hard-to-measure disease processes.

  19. Towards a culturally independent participatory design method: Fusing game elements into the design process

    DEFF Research Database (Denmark)

    Jensen, Mika Yasuoka; Nakatani, Momoko; Ohno, Takehiko

    2013-01-01

    Historically, Participatory Design (PD) was introduced and applied in the Scandinavian and American context as a practical design method for collective creativity and stakeholder involvement. In this paper, by fusing game elements into PD, we suggest a first step towards a culturally independent PD method called the ICT Service Design Game to ease the prevailing concern that PD has limited applicability in other cultural settings. We conduct four experiments on ICT Service Design Game in Scandinavia and Asia to evaluate its feasibility. The experiments identify some differences in the PD process and imply that the introduction of game elements allows PD to be effectively utilized in culturally diverse settings.

  20. Evidence for a maximum mass cut-off in the neutron star mass distribution and constraints on the equation of state

    Science.gov (United States)

    Alsing, Justin; Silva, Hector O.; Berti, Emanuele

    2018-04-01

    We infer the mass distribution of neutron stars in binary systems using a flexible Gaussian mixture model and use Bayesian model selection to explore evidence for multi-modality and a sharp cut-off in the mass distribution. We find overwhelming evidence for a bimodal distribution, in agreement with previous literature, and report for the first time positive evidence for a sharp cut-off at a maximum neutron star mass. We measure the maximum mass to be 2.0M⊙ < m_max < 2.2M⊙ (68% confidence). If the sharp cut-off is interpreted as the maximum stable neutron star mass allowed by the equation of state of dense matter, our measurement puts constraints on the equation of state. For a set of realistic equations of state that support >2M⊙ neutron stars, our inference of m_max is able to distinguish between models at odds ratios of up to 12:1, whilst under a flexible piecewise polytropic equation of state model our maximum mass measurement improves constraints on the pressure at 3-7 × the nuclear saturation density by ~30-50% compared to simply requiring m_max > 2M⊙. We obtain a lower bound on the maximum sound speed attained inside the neutron star of c_s^max > 0.63c (99.8%), ruling out c_s^max ≤ c/√3 at high significance. Our constraints on the maximum neutron star mass strengthen the case for neutron star-neutron star mergers as the primary source of short gamma-ray bursts.
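
    For illustration, the kind of population model being selected between can be written down in a few lines: a two-component Gaussian mixture with a sharp cut-off at a maximum mass m_max (a hypothetical parameterization, not the authors' exact model):

    ```python
    import numpy as np
    from scipy.stats import norm

    def truncated_bimodal_pdf(m, w, mu1, s1, mu2, s2, m_max):
        """Two-component Gaussian mixture truncated above at m_max and
        renormalized; parameter values below are purely illustrative."""
        base = w * norm.pdf(m, mu1, s1) + (1 - w) * norm.pdf(m, mu2, s2)
        Z = w * norm.cdf(m_max, mu1, s1) + (1 - w) * norm.cdf(m_max, mu2, s2)
        return np.where(m <= m_max, base / Z, 0.0)

    m = np.linspace(1.0, 2.5, 7)   # masses in solar units
    print(truncated_bimodal_pdf(m, 0.6, 1.35, 0.07, 1.8, 0.2, 2.1))
    ```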

  1. Verification of maximum radial power peaking factor due to insertion of FPM-LEU target in the core of RSG-GAS reactor

    Energy Technology Data Exchange (ETDEWEB)

    Setyawan, Daddy, E-mail: d.setyawan@bapeten.go.id [Center for Assessment of Regulatory System and Technology for Nuclear Installations and Materials, Indonesian Nuclear Energy Regulatory Agency (BAPETEN), Jl. Gajah Mada No. 8 Jakarta 10120 (Indonesia); Rohman, Budi [Licensing Directorate for Nuclear Installations and Materials, Indonesian Nuclear Energy Regulatory Agency (BAPETEN), Jl. Gajah Mada No. 8 Jakarta 10120 (Indonesia)

    2014-09-30

    Radial power peaking factor in the RSG-GAS reactor is a very important parameter for the safety of the reactor during operation. Data on the radial power peaking factor due to the insertion of the Fission Product Molybdenum with Low Enriched Uranium (FPM-LEU) target were reported by PRSG to BAPETEN through the RSG-GAS Safety Analysis Report (SAR) for FPM-LEU target irradiation. To support the evaluation of the Safety Analysis Report incorporated in the submission, the assessment unit of BAPETEN is carrying out an independent assessment to verify safety-related parameters in the SAR, including the neutronic aspect. The work includes verification of the maximum radial power peaking factor change due to the insertion of the FPM-LEU target in the RSG-GAS reactor by computational methods using MCNP5 and ORIGEN2. From the results of the calculations, the new maximum value of the radial power peaking factor due to the insertion of the FPM-LEU target is 1.27, smaller than the limit of 1.4 allowed in the SAR.

  2. Long-term independent brain-computer interface home use improves quality of life of a patient in the locked-in state: a case study.

    Science.gov (United States)

    Holz, Elisa Mira; Botrel, Loic; Kaufmann, Tobias; Kübler, Andrea

    2015-03-01

    Despite intense brain-computer interface (BCI) research for more than two decades, BCIs have hardly been established at patients' homes. The current study aimed at demonstrating expert-independent BCI home use by a patient in the locked-in state and the effect it has on quality of life. In this case study, the P300 BCI-controlled application Brain Painting was facilitated and installed at the patient's home. Family and caregivers were trained in setting up the BCI system. After every BCI session, the end user indicated subjective level of control, loss of control, level of exhaustion, satisfaction, frustration, and enjoyment. To monitor BCI home use, evaluation data of every session were automatically sent and stored on a remote server. Satisfaction with the BCI as an assistive device and subjective workload were indicated by the patient. In accordance with the user-centered design, usability of the BCI was evaluated in terms of its effectiveness, efficiency, and satisfaction, and the influence of the BCI on the end user's quality of life was assessed. The setting was the patient's home; the participant was a 73-year-old patient with amyotrophic lateral sclerosis in the locked-in state; no intervention was applicable. The BCI has been used by the patient independently of experts for more than 14 months. The patient painted in about 200 BCI sessions (1-3 times per week) with a mean painting duration of 81.86 minutes (SD = 52.15; maximum, 230.41). BCI improved the patient's quality of life. In most of the BCI sessions the end user's satisfaction was high (mean = 7.4, SD = 3.24; range, 0-10). Dissatisfaction occurred mostly because of technical problems at the beginning of the study or varying BCI control. The subjective workload was moderate (mean = 40.61; range, 0-100). The end user was highly satisfied with all components of the BCI (mean, 4.42-5.0; range, 1-5). A perfect match between the user and the BCI technology was achieved (mean, 4.8; range, 1-5). Brain Painting had a positive impact on the patient's life on all three dimensions: competence…

  3. Estimating the energy independence of a municipal wastewater treatment plant incorporating green energy resources

    International Nuclear Information System (INIS)

    Chae, Kyu-Jung; Kang, Jihoon

    2013-01-01

    Highlights: • We estimated green energy production in a municipal wastewater treatment plant. • Engineered approaches in mining multiple green energy resources were presented. • The estimated green energy production accounted for 6.5% of energy independence in the plant. • We presented practical information regarding green energy projects in water infrastructures. - Abstract: Increasing energy prices and concerns about global climate change highlight the need to improve energy independence in municipal wastewater treatment plants (WWTPs). This paper presents methodologies for estimating the energy independence of a municipal WWTP with a design capacity of 30,000 m³/d incorporating various green energy resources into the existing facilities, including different types of 100 kW photovoltaics, 10 kW small hydropower, and an effluent heat recovery system with a 25-refrigeration-ton heat pump. It also provides guidance for the selection of appropriate renewable technologies or their combinations for specific WWTP applications to reach energy self-sufficiency goals. The results showed that annual energy production equal to 107 tons of oil equivalent could be expected when the proposed green energy resources are implemented in the WWTP. The energy independence, which was defined as the percent ratio of green energy production to energy consumption, was estimated to be a maximum of 6.5% and to vary with on-site energy consumption in the WWTP. Implementing green energy resources tailored to specific site conditions is necessary to improve the energy independence in WWTPs. Most of the applied technologies were economically viable primarily because of the financial support under the mandatory renewable portfolio standard in Korea

  4. Particle Swarm Optimization Based of the Maximum Photovoltaic ...

    African Journals Online (AJOL)

    Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power, since the peak power point depends on the temperature and the irradiation level. Maximum power point tracking is therefore necessary for maximum efficiency. In this work, a Particle Swarm ...
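
    A minimal sketch of particle swarm search for the maximum power point, with a hypothetical single-peak power-voltage curve standing in for the real array (all constants illustrative):

    ```python
    import numpy as np

    def pv_power(v):
        """Hypothetical PV power-voltage curve (single peak near 17.5 V)."""
        i = 5.0 * (1.0 - np.exp((v - 21.0) / 1.5))   # crude diode-like I-V
        return np.clip(v * i, 0.0, None)

    # Minimal particle swarm search for the operating voltage of max power.
    rng = np.random.default_rng(3)
    n_particles, n_iter = 8, 40
    v = rng.uniform(0.0, 21.0, n_particles)          # particle positions (volts)
    vel = np.zeros(n_particles)
    pbest, pbest_val = v.copy(), pv_power(v)
    gbest = pbest[np.argmax(pbest_val)]

    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.6 * vel + 1.5 * r1 * (pbest - v) + 1.5 * r2 * (gbest - v)
        v = np.clip(v + vel, 0.0, 21.0)
        val = pv_power(v)
        better = val > pbest_val
        pbest[better], pbest_val[better] = v[better], val[better]
        gbest = pbest[np.argmax(pbest_val)]

    print(gbest, pv_power(gbest))   # voltage at the (toy) maximum power point
    ```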

  5. 40 CFR 1045.140 - What is my engine's maximum engine power?

    Science.gov (United States)

    2010-07-01

    ...) Maximum engine power for an engine family is generally the weighted average value of maximum engine power... engine family's maximum engine power apply in the following circumstances: (1) For outboard or personal... value for maximum engine power from all the different configurations within the engine family to...

  6. New bound on MIS and MIN-CDS for a unit ball graph

    Directory of Open Access Journals (Sweden)

    D.A. Mojdeh

    2017-09-01

    Full Text Available The size of the maximum independent set (MIS) in a graph G is called the independence number. The size of the minimum connected dominating set (MIN-CDS) in G is called the connected domination number. The aim of this paper is to determine two better upper bounds on the independence number, dependent on the connected domination number, for a unit ball graph. Further, we improve the upper bound to obtain the best bound among those obtained thus far.
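
    For context, the independence number is the size attained by a maximum independent set; computing it is NP-hard in general, so simple greedy heuristics are often used to obtain lower bounds. A sketch (not from the paper, which derives analytic upper bounds):

    ```python
    def greedy_independent_set(adj):
        """Greedy maximal independent set: repeatedly take a minimum-degree
        vertex and discard its neighbors. Gives a lower bound on the
        independence number (not generally maximum: MIS is NP-hard)."""
        remaining = set(adj)
        chosen = set()
        while remaining:
            v = min(remaining, key=lambda u: len(adj[u] & remaining))
            chosen.add(v)
            remaining -= adj[v] | {v}
        return chosen

    # Toy graph given as an adjacency dict.
    adj = {
        0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
        3: {2, 4}, 4: {3, 5}, 5: {4},
    }
    print(greedy_independent_set(adj))   # {0, 3, 5}: independent set of size 3
    ```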

  7. On the independent points in the sky for the search of periodic gravitational wave

    International Nuclear Information System (INIS)

    Sahay, S.K.

    2009-01-01

    In the search for periodic gravitational waves, we investigate independent points in the sky, assuming the noise power spectral density to be flat. We analyse one-week data sets with different initial azimuths of the Earth. The analysis shows a significant difference in the independent points in the sky under search. We numerically obtain an approximate relation for trading off computational cost against sensitivity. We also discuss the feasibility of a coherent search in a small frequency band in reference to advanced LIGO. (authors)

  8. Media independence and dividend policy

    DEFF Research Database (Denmark)

    Farooq, Omar; Dandoune, Salma

    2012-01-01

    Can media pressurize managers to disgorge excess cash to shareholders? Do firms in countries with more independent media follow different dividend policies than firms with less independent media? This paper seeks to answer these questions and aims to document the relationship between media independence and dividend policies in emerging markets. Using a dataset from twenty three emerging markets, we show a significantly negative relationship between dividend policies (payout ratio and decision to pay dividend) and media independence. We argue that independent media reduces information asymmetries for stock market participants. Consequently, stock market participants in emerging markets with more independent media do not demand as high and as much dividends as their counterparts in emerging markets with less independent media. We also show that press independence is more important in defining…

  9. Increased Set Shifting Costs in Fasted Healthy Volunteers

    Science.gov (United States)

    Bolton, Heather M.; Burgess, Paul W.; Gilbert, Sam J.; Serpell, Lucy

    2014-01-01

    We investigated the impact of temporary food restriction on a set shifting task requiring participants to judge clusters of pictures against a frequently changing rule. 60 healthy female participants underwent two testing sessions: once after fasting for 16 hours and once in a satiated state. Participants also completed a battery of questionnaires (Hospital Anxiety and Depression Scale [HADS]; Persistence, Perseveration and Perfectionism Questionnaire [PPPQ-22]; and Eating Disorders Examination Questionnaire [EDE-Q6]). Set shifting costs were significantly increased after fasting; this effect was independent of self-reported mood and perseveration. Furthermore, higher levels of weight concern predicted a general performance decrement under conditions of fasting. We conclude that relatively short periods of fasting can lead to set shifting impairments. This finding may have relevance to studies of development, individual differences, and the interpretation of psychometric tests. It also could have implications for understanding the etiology and maintenance of eating disorders, in which impaired set shifting has been implicated. PMID:25025179

  10. Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography

    International Nuclear Information System (INIS)

    Brendel, Bernhard; Teuffenbach, Maximilian von; Noël, Peter B.; Pfeiffer, Franz; Koehler, Thomas

    2016-01-01

    Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and examine its properties. Furthermore this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter images from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron, and results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Furthermore, it was shown that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with less aliasing artifacts and less streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.
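
    In outline, a penalized maximum likelihood cost of this shape, a Gaussian data term on the measured detector values plus one independent smoothing prior per image, can be sketched as follows; the forward model and all constants are hypothetical stand-ins, not the paper's DPC physics:

    ```python
    import numpy as np

    def smoothness_penalty(img, beta):
        # Quadratic first-difference prior on a 1-D image: a toy stand-in
        # for the independent smoothing priors on attenuation/phase/scatter.
        return beta * np.sum(np.diff(img) ** 2)

    def penalized_cost(images, forward, data, var, betas):
        # Negative Gaussian log-likelihood of the detector data plus one
        # independent smoothing prior per image.
        att, phase, scatter = images
        resid = data - forward(att, phase, scatter)
        neg_loglik = 0.5 * np.sum(resid ** 2 / var)
        priors = sum(smoothness_penalty(img, b)
                     for img, b in zip((att, phase, scatter), betas))
        return neg_loglik + priors

    # Hypothetical toy forward model: the detector sees a linear mix of the
    # three images (the real DPC forward model is nonlinear in the phase).
    n = 32
    rng = np.random.default_rng(4)
    forward = lambda a, p, s: a + 0.5 * np.gradient(p) + 0.1 * s
    truth = (np.ones(n), np.linspace(0.0, 1.0, n), np.zeros(n))
    data = forward(*truth) + 0.01 * rng.standard_normal(n)
    print(penalized_cost(truth, forward, data, 1e-4, betas=(1.0, 1.0, 1.0)))
    ```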

  11. 42 CFR 409.62 - Lifetime maximum on inpatient psychiatric care.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Lifetime maximum on inpatient psychiatric care. 409....62 Lifetime maximum on inpatient psychiatric care. There is a lifetime maximum of 190 days on inpatient psychiatric hospital services available to any beneficiary. Therefore, once an individual receives...

  12. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
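
    One member of this class, Ornstein-Uhlenbeck motion, is easy to simulate, and its stationary variance σ²τ/2 illustrates the fluctuation-dissipation link between noise scale and relaxation rate mentioned above (a minimal sketch with hypothetical parameters):

    ```python
    import numpy as np

    # Euler-Maruyama simulation of Ornstein-Uhlenbeck motion, one of the
    # maximum-entropy movement models named above (parameters hypothetical).
    rng = np.random.default_rng(5)
    tau, sigma, dt, n = 5.0, 1.0, 0.1, 100000   # relaxation time, noise scale
    x = np.empty(n); x[0] = 0.0
    for t in range(n - 1):
        # dx = -(x / tau) dt + sigma dW
        x[t + 1] = x[t] - (x[t] / tau) * dt \
                   + sigma * np.sqrt(dt) * rng.standard_normal()

    # Fluctuation-dissipation: stationary variance = sigma^2 * tau / 2.
    print(x.var(), sigma ** 2 * tau / 2)   # empirical vs theoretical
    ```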

  13. Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs

    Science.gov (United States)

    van Dam, Wim; Howard, Mark

    2011-07-01

    We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiołkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.
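
    The reduction can be illustrated in the simplest case d = 2, where two orthonormal bases are mutually unbiased when all squared overlaps equal 1/d: build a graph whose vertices are candidate bases, with edges for unbiased pairs, and read off a maximum clique. The sketch below uses generic qubit bases rather than the paper's maximally entangled stabilizer states or genuine Cayley graphs:

    ```python
    import itertools
    import networkx as nx
    import numpy as np

    d = 2
    I2 = np.eye(2)                                        # computational basis
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard basis
    S = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)        # circular basis
    # The last candidate coincides with the circular basis, so it cannot
    # enlarge a clique beyond the complete d + 1 = 3 MUBs of a qubit.
    bases = [I2, H, S, np.array([[1, 0], [0, 1j]]) @ H]

    def unbiased(B1, B2, d=2, tol=1e-9):
        overlaps = np.abs(B1.conj().T @ B2) ** 2
        return np.allclose(overlaps, 1.0 / d, atol=tol)

    G = nx.Graph()
    G.add_nodes_from(range(len(bases)))
    G.add_edges_from(
        (i, j) for i, j in itertools.combinations(range(len(bases)), 2)
        if unbiased(bases[i], bases[j]))

    best = max(nx.find_cliques(G), key=len)   # maximum clique by enumeration
    print(best)                               # e.g. [0, 1, 2]: a full MUB set
    ```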

  14. Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs

    International Nuclear Information System (INIS)

    Dam, Wim van; Howard, Mark

    2011-01-01

    We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiolkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.

  15. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, such methods, e.g., Approximate Bayesian Computation (ABC), can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
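
    A minimal sketch of such a stochastic-approximation scheme (a Kiefer-Wolfowitz-style finite-difference scheme, not the authors' implementation): each iteration simulates summaries near the current estimate, forms a noisy gradient of the mismatch with the observed summary, and takes a shrinking step.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def simulate_summary(theta, n=200):
        """Hypothetical simulator: draws data given theta and returns a
        summary statistic (here, the mean of Normal(theta, 1) samples)."""
        return rng.normal(theta, 1.0, n).mean()

    s_obs = 1.7      # observed summary statistic
    theta = 0.0      # initial estimate

    # Shrinking gains a_k (step) and c_k (probe width) give convergence of
    # the noisy finite-difference descent on the summary mismatch.
    for k in range(1, 2001):
        a_k, c_k = 0.5 / k, 0.5 / k ** (1 / 3)
        loss_plus = (simulate_summary(theta + c_k) - s_obs) ** 2
        loss_minus = (simulate_summary(theta - c_k) - s_obs) ** 2
        grad = (loss_plus - loss_minus) / (2 * c_k)
        theta -= a_k * grad

    print(theta)   # settles near 1.7, where simulated summaries match s_obs
    ```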

  16. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
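
    The quoted 90°-100°C figure is easy to reproduce: neglecting sensible, latent, and ground heat fluxes, and simplifying the longwave exchange to pure emission, a surface absorbing 1000 W m⁻² sits near its radiative-equilibrium temperature.

    ```python
    # Radiative-equilibrium upper bound for the surface temperature,
    # under the simplifications stated above.
    SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
    absorbed = 1000.0      # absorbed shortwave flux, W m^-2
    T = (absorbed / SIGMA) ** 0.25
    print(T, T - 273.15)   # ~364 K, i.e. ~91 degC: in the 90-100 degC range
    ```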

  17. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discrimination analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
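
    The difference between the two decision rules is just whether a log-prior term is added to each class's Gaussian log-likelihood. A toy contrast (classes, statistics, and priors below are hypothetical):

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    means = {"urban": np.array([80.0, 60.0]), "forest": np.array([60.0, 90.0])}
    covs = {"urban": np.diag([100.0, 100.0]), "forest": np.diag([100.0, 100.0])}
    priors = {"urban": 0.2, "forest": 0.8}    # a priori class occurrences

    x = np.array([70.0, 75.0])                # a pixel equidistant from both

    def classify(x, use_priors):
        # Gaussian log-likelihood per class, optionally plus the log-prior.
        scores = {c: multivariate_normal.logpdf(x, means[c], covs[c])
                     + (np.log(priors[c]) if use_priors else 0.0)
                  for c in means}
        return max(scores, key=scores.get)

    print(classify(x, use_priors=False))  # equal-prior tie, falls to "urban"
    print(classify(x, use_priors=True))   # priors break the tie to "forest"
    ```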

  18. Cumulative hierarchies and computability over universes of sets

    Directory of Open Access Journals (Sweden)

    Domenico Cantone

    2008-05-01

    Full Text Available Various metamathematical investigations, beginning with Fraenkel's historical proof of the independence of the axiom of choice, called for suitable definitions of hierarchical universes of sets. This led to the discovery of such important cumulative structures as the one singled out by von Neumann (generally taken as the universe of all sets) and Gödel's universe of the so-called constructibles. Variants of those are exploited occasionally in studies concerning the foundations of analysis (according to Abraham Robinson's approach), or concerning non-well-founded sets. We hence offer a systematic presentation of these many structures, partly motivated by their relevance and pervasiveness in mathematics. As we report, numerous properties of hierarchy-related notions such as rank have been verified with the assistance of the ÆtnaNova proof-checker. Through SETL and Maple implementations of procedures which effectively handle Ackermann's hereditarily finite sets, we illustrate a particularly significant case among those in which the entities which form a universe of sets can be algorithmically constructed and manipulated; thereby, the fruitful bearing on pure mathematics of cumulative set hierarchies ramifies into the realms of theoretical computer science and algorithmics.
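
    Ackermann's encoding mentioned here is pleasantly concrete: the natural number n encodes the hereditarily finite set whose members are the decodings of the positions of the 1-bits of n. A mutually inverse pair of routines (a sketch):

    ```python
    def ack_decode(n):
        """Ackermann coding: n encodes the hereditarily finite set whose
        elements are the decodings of the 1-bit positions of n."""
        return frozenset(ack_decode(i) for i in range(n.bit_length())
                         if (n >> i) & 1)

    def ack_encode(s):
        """Inverse map: a hereditarily finite set back to its code."""
        return sum(1 << ack_encode(e) for e in s)

    # 0 <-> {} ; 1 <-> {{}} ; 3 <-> {{}, {{}}} (the von Neumann ordinal 2).
    two = ack_decode(3)
    print(two)              # frozenset({frozenset(), frozenset({frozenset()})})
    print(ack_encode(two))  # 3
    ```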

  19. Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory

    Science.gov (United States)

    Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.

    1990-01-01

    New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.

  20. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

    Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data analysis Center) and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. Using this relation, we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is proven by a good correlation (r = 0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
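
    Higuchi's method estimates the fractal dimension from how the average length of subsampled versions of the series scales with the lag k (L(k) ∝ k^(-D)). A compact implementation sketch (not the authors' code):

    ```python
    import numpy as np

    def higuchi_fd(x, k_max=10):
        """Higuchi's method: the average curve length L(k) over lag k scales
        as k^(-D); D is the slope of log L(k) versus log(1/k)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        ks, lengths = [], []
        for k in range(1, k_max + 1):
            Lmk = []
            for m in range(k):
                idx = np.arange(m, n, k)       # subsample starting at offset m
                if len(idx) < 2:
                    continue
                norm = (n - 1) / ((len(idx) - 1) * k)   # length normalization
                Lmk.append(np.abs(np.diff(x[idx])).sum() * norm / k)
            ks.append(k); lengths.append(np.mean(Lmk))
        D, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(lengths), 1)
        return D

    rng = np.random.default_rng(7)
    print(higuchi_fd(rng.standard_normal(2000)))   # white noise: D close to 2
    ```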