WorldWideScience

Sample records for point sets minimizing

  1. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained, MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm starts with the main failure of interest, the top event, and proceeds down to basic independent component failures, called primary events, to resolve the fault tree into the minimal sets. A key point of the algorithm is that an AND gate increases the size of cut sets and the number of path sets, while an OR gate increases the number of cut sets and the size of path sets. Other types of logic gates must be described in terms of AND and OR gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates.
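The top-down resolution MOCUS performs can be sketched in a few lines. The fault tree below is a hypothetical example, not from the MOCUS documentation; the code only illustrates the AND/OR expansion and the minimality filter, not the control parameters of the actual program.

```python
from itertools import product

# Hypothetical fault tree: each gate maps to ("AND"|"OR", [children]);
# names not present as keys are primary events.
TREE = {
    "TOP": ("OR", ["G1", "E3"]),
    "G1":  ("AND", ["E1", "G2"]),
    "G2":  ("OR", ["E2", "E3"]),
}

def cut_sets(event):
    """Return the cut sets (frozensets of primary events) for `event`."""
    if event not in TREE:                      # primary event
        return [frozenset([event])]
    kind, children = TREE[event]
    child_sets = [cut_sets(c) for c in children]
    if kind == "OR":                           # OR: union of the children's cut sets
        return [s for sets in child_sets for s in sets]
    # AND: every combination of one cut set per child, merged together
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(sets):
    """Drop any cut set that properly contains another (non-minimal)."""
    return [s for s in sets if not any(t < s for t in sets)]

mcs = sorted(minimize(set(cut_sets("TOP"))), key=sorted)
```

Here the cut set {E1, E3} produced by the expansion is discarded because it contains the minimal cut set {E3}.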

  2. Geometric fit of a point set by generalized circles

    DEFF Research Database (Denmark)

    Körner, Mark-Christopher; Brimberg, Jack; Juel, Henrik

    2010-01-01

In our paper we approximate a set of given points by a general circle. More precisely, given two norms k1 and k2 and a set of points in the plane, we consider the problem of locating and scaling the unit circle of norm k1 such that the sum of weighted distances between the circumference of the circle and the given points is minimized, where the distance is measured by a norm k2. We present results for the general case. In the case that k1 and k2 are both polyhedral norms, we are able to solve the problem by investigating a finite candidate set.

  3. Using SETS to find minimal cut sets in large fault trees

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1978-01-01

An efficient algebraic algorithm for finding the minimal cut sets for a large fault tree was defined and a new procedure which implements the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the computer time required to determine the minimal cut sets when these techniques are not employed. It is shown for a given example that the execution time required to determine the minimal cut sets can be reduced from 7,686 seconds to 7 seconds when all of these techniques are employed.
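Of the techniques listed, the coalescing of consecutive gates of the same kind is the easiest to illustrate. The sketch below uses a hypothetical nested-tuple gate representation, not the actual SETS equation format; merging an OR child into an OR parent (and likewise for AND) widens gates without changing the Boolean function, which keeps intermediate set expansions smaller.

```python
# Hypothetical gate representation: ("AND"|"OR", [children]); children may be
# nested gates (tuples) or primary-event names (strings).
def coalesce(node):
    """Merge consecutive gates of the same kind into one wider gate,
    one of the tree-simplification steps described for SETS."""
    if isinstance(node, str):
        return node
    kind, children = node
    flat = []
    for child in map(coalesce, children):
        if isinstance(child, tuple) and child[0] == kind:
            flat.extend(child[1])        # lift grandchildren into this gate
        else:
            flat.append(child)
    return (kind, flat)

tree = ("OR", ["A", ("OR", ["B", ("OR", ["C"]), ("AND", ["D", "E"])])])
```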

  4. IMPORTANCE, Minimal Cut Sets and System Availability from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Lambert, H. W.

    1987-01-01

1 - Description of problem or function: IMPORTANCE computes various measures of probabilistic importance of basic events and minimal cut sets to a fault tree or reliability network diagram. The minimal cut sets, the failure rates and the fault duration times (i.e., the repair times) of all basic events contained in the minimal cut sets are supplied as input data. The failure and repair distributions are assumed to be exponential. IMPORTANCE, a quantitative evaluation code, then determines the probability of the top event and computes the importance of minimal cut sets and basic events by a numerical ranking. Two classes of measures are computed: the first describes system behavior at one point in time; the second describes sequences of failures that cause the system to fail in time. All measures are computed assuming statistical independence of basic events. In addition, system unavailability and the expected number of system failures are computed by the code. 2 - Method of solution: Seven measures of basic event importance and two measures of cut set importance can be computed. Birnbaum's measure of importance (i.e., the partial derivative) and the probability of the top event are computed using the min cut upper bound. If there are no replicated events in the minimal cut sets, then the min cut upper bound is exact. If basic events are replicated in the minimal cut sets, then, based on experience, the min cut upper bound is accurate if the probability of the top event is less than 0.1. Simpson's rule is used in computing the time-integrated measures of importance. Newton's method for approximating the roots of an equation is employed in the options where the importance measures are computed as a function of the probability of the top event, and a shell sort puts the output in descending order of importance.
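Two of the quantities described, the min cut upper bound for the top event and Birnbaum's measure, can be sketched numerically. The cut sets and event probabilities below are hypothetical; Birnbaum's measure is evaluated here as the difference of the top-event probability with the event forced failed versus forced working, which equals the partial derivative for systems linear in each event probability.

```python
# Hypothetical minimal cut sets and basic-event probabilities.
p = {"A": 0.01, "B": 0.05, "C": 0.02}
CUT_SETS = [{"A", "B"}, {"A", "C"}]

def cut_set_prob(cs, p):
    """Probability of a cut set under statistical independence."""
    out = 1.0
    for e in cs:
        out *= p[e]
    return out

def top_event(p):
    """Min cut upper bound: P(top) <= 1 - prod_i (1 - P(C_i))."""
    q = 1.0
    for cs in CUT_SETS:
        q *= 1.0 - cut_set_prob(cs, p)
    return 1.0 - q

def birnbaum(event, p):
    """Birnbaum importance: P(top | p_e = 1) - P(top | p_e = 0),
    i.e. the partial derivative of P(top) w.r.t. p[event]."""
    hi = dict(p); hi[event] = 1.0
    lo = dict(p); lo[event] = 0.0
    return top_event(hi) - top_event(lo)
```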

  5. Abelian groups with a minimal generating set | Ruzicka ...

    African Journals Online (AJOL)

We study the existence of minimal generating sets in Abelian groups. We prove that Abelian groups with minimal generating sets are not closed under quotients, nor under subgroups, nor under infinite products. We give necessary and sufficient conditions for the existence of a minimal generating set, provided that the Abelian ...

  6. Knee point search using cascading top-k sorting with minimized time complexity.

    Science.gov (United States)

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
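The probabilistic optimization of the selection number k is beyond a short sketch, but the basic building block is easy to show: a top-k partial sort (cheaper than a full sort) followed by a knee test on the sorted prefix. Taking the knee as the largest drop between consecutive sorted values is a simplifying assumption for illustration, not the paper's criterion.

```python
import heapq

def knee_in_top_k(values, k):
    """Partially sort with a top-k selection, then take the knee as the
    position of the largest drop between consecutive sorted values --
    a simplified stand-in for the paper's probabilistic knee criterion."""
    top = heapq.nlargest(k, values)          # returns the k largest, descending
    drops = [top[i] - top[i + 1] for i in range(len(top) - 1)]
    i = max(range(len(drops)), key=drops.__getitem__)
    return top[i], top[i + 1]                # the pair straddling the knee

vals = [3, 97, 5, 95, 96, 4, 98, 2, 94, 6]
```

If the chosen k is too small the knee may not yet be inside the sorted prefix, which is exactly why the paper cascades several top-k steps.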

  7. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé ; Vigneron, Antoine E.

    2013-01-01

Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points.

  8. Algorithm for finding minimal cut sets in a fault tree

    International Nuclear Information System (INIS)

    Rosenberg, Ladislav

    1996-01-01

This paper presents several algorithms that have been used in a computer code for fault-tree analysis by the minimal cut sets method. The main algorithm is a more efficient version of the new CARA algorithm, which finds minimal cut sets with an auxiliary dynamical structure. The presented algorithm allows the search for minimal cut sets to be restricted by defined requirements: by the order of the minimal cut sets, by the number of minimal cut sets, or both. This algorithm is three to six times faster than the primary version of the CARA algorithm.

  9. Towards a minimal generic set of domains of functioning and health.

    Science.gov (United States)

    Cieza, Alarcos; Oberhauser, Cornelia; Bickenbach, Jerome; Chatterji, Somnath; Stucki, Gerold

    2014-03-03

The World Health Organization (WHO) has argued that functioning, and, more concretely, functioning domains constitute the operationalization that best captures our intuitive notion of health. Functioning is, therefore, a major public-health goal. A great deal of data about functioning is already available. Nonetheless, it is not possible to compare and optimally utilize this information. One potential approach to address this challenge is to propose a generic and minimal set of functioning domains that captures the experience of individuals and populations with respect to functioning and health. The objective of this investigation was to identify a minimal generic set of ICF domains suitable for describing functioning in adults at both the individual and population levels. We performed a psychometric study using data from: 1) the German National Health Interview and Examination Survey 1998, 2) the United States National Health and Nutrition Examination Survey 2007/2008, and 3) the ICF Core Set studies. Random Forests and Group Lasso regression were applied using one self-reported general-health question as a dependent variable. The domains selected were compared to those of the World Health Survey (WHS) developed by the WHO. Seven domains of the International Classification of Functioning, Disability and Health (ICF) are proposed as a minimal generic set of functioning and health: energy and drive functions, emotional functions, sensation of pain, carrying out daily routine, walking, moving around, and remunerative employment. The WHS domains of self-care, cognition, interpersonal activities, and vision were not included in our selection. The minimal generic set proposed in this study is the starting point for addressing one of the most important challenges in health measurement: the comparability of data across studies and countries. It also represents the first step in developing a common metric of health to link information from the general population to information

  10. Enumeration of minimal stoichiometric precursor sets in metabolic networks.

    Science.gov (United States)

    Andrade, Ricardo; Wannagat, Martin; Klein, Cecilia C; Acuña, Vicente; Marchetti-Spaccamela, Alberto; Milreu, Paulo V; Stougie, Leen; Sagot, Marie-France

    2016-01-01

What an organism needs at least from its environment to produce a set of metabolites, e.g. target(s) of interest and/or biomass, has been called a minimal precursor set. Early approaches to enumerate all minimal precursor sets took into account only the topology of the metabolic network (topological precursor sets). Due to cycles and the stoichiometric values of the reactions, it is often not possible to produce the target(s) from a topological precursor set, in the sense that there is no feasible flux. Although considering the stoichiometry makes the problem harder, it enables one to obtain biologically reasonable precursor sets, which we call stoichiometric. Recently a method to enumerate all minimal stoichiometric precursor sets was proposed in the literature; the relationship between topological and stoichiometric precursor sets had, however, not yet been studied. Here that relationship is highlighted. We also present two algorithms that enumerate all minimal stoichiometric precursor sets. The first one is of theoretical interest only and is based on the above-mentioned relationship. The second approach solves a series of mixed integer linear programming problems. We compared the computed minimal precursor sets to experimentally obtained growth media of several Escherichia coli strains using genome-scale metabolic networks. The results show that the second approach efficiently enumerates minimal precursor sets taking stoichiometry into account, and allows for broad in silico studies of strains or species interactions that may help to understand, e.g., pathotype- and niche-specific metabolic capabilities. Sasita is written in Java, uses CPLEX as LP solver, and can be downloaded together with all networks and input files used in this paper at http://www.sasita.gforge.inria.fr.

  11. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(nlogn)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
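The paper's O(n log n) parametric-search algorithm is involved, but the objective itself can be sketched with a plain quadratic-time approach: a greedy sweep decides whether a given error bound is achievable with k steps, and the optimum is one of finitely many candidate errors determined by point pairs. The point format (x, y, weight) and the example data are assumptions for illustration.

```python
def feasible(points, k, e):
    """Can k steps achieve max weighted vertical distance <= e?
    Greedy sweep over points sorted by x: extend the current step while
    the candidate-value intervals [y - e/w, y + e/w] still intersect."""
    steps, lo, hi = 1, float("-inf"), float("inf")
    for _, y, w in points:
        a, b = y - e / w, y + e / w
        lo, hi = max(lo, a), min(hi, b)
        if lo > hi:                          # must start a new step here
            steps += 1
            lo, hi = a, b
    return steps <= k

def min_error(points, k):
    """Smallest achievable max weighted distance, found by checking the
    candidate errors w_i*w_j*|y_i - y_j|/(w_i + w_j) in increasing order.
    A brute-force sketch, not the paper's O(n log n) algorithm."""
    ys = [(y, w) for _, y, w in points]
    cands = {0.0}
    for yi, wi in ys:
        for yj, wj in ys:
            cands.add(wi * wj * abs(yi - yj) / (wi + wj))
    return min(e for e in sorted(cands) if feasible(points, k, e))

pts = [(0, 0.0, 1.0), (1, 1.0, 1.0), (2, 10.0, 1.0), (3, 11.0, 1.0)]
```

With k = 2 the sweep splits the example into {0, 1} and {10, 11}, each fitted at its midpoint with error 0.5.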

  12. Triple Hierarchical Variational Inequalities with Constraints of Mixed Equilibria, Variational Inequalities, Convex Minimization, and Hierarchical Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

We introduce and analyze a hybrid iterative algorithm by virtue of Korpelevich's extragradient method, the viscosity approximation method, the hybrid steepest-descent method, and the averaged mapping approach to the gradient-projection algorithm. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs), the solution set of finitely many variational inequality problems (VIPs), the solution set of a general system of variational inequalities (GSVI), and the set of minimizers of a convex minimization problem (CMP), which is just the unique solution of a triple hierarchical variational inequality (THVI) in a real Hilbert space. In addition, we also consider the application of the proposed algorithm to solving a hierarchical fixed point problem with constraints of finitely many GMEPs, finitely many VIPs, the GSVI, and the CMP. The results obtained in this paper improve and extend the corresponding results announced by many others.

  13. Minimal generating sets of groups, rings, and fields | Halbeisen ...

    African Journals Online (AJOL)

    A subset X of a group (or a ring, or a field) is called generating, if the smallest subgroup (or subring, or subfield) containing X is the group (ring, field) itself. A generating set X is called minimal generating, if X does not properly contain any generating set. The existence and cardinalities of minimal generating sets of various ...

  14. Minimal cut-set methodology for artificial intelligence applications

    International Nuclear Information System (INIS)

    Weisbin, C.R.; de Saussure, G.; Barhen, J.; Oblow, E.M.; White, J.C.

    1984-01-01

This paper reviews minimal cut-set theory and illustrates its application with an example. The minimal cut-set approach uses disjunctive normal form in Boolean algebra and various Boolean operators to simplify very complicated tree structures composed of AND/OR gates. The simplification process is automated and performed off-line using existing computer codes to implement the Boolean reduction on the finite, but large, tree structure. With this approach, on-line expert diagnostic systems whose response time is critical could determine directly whether a goal is achievable by comparing the actual system state to a concisely stored set of preprocessed critical state elements.

  15. KCUT, code to generate minimal cut sets for fault trees

    International Nuclear Information System (INIS)

    Han, Sang Hoon

    2008-01-01

1 - Description of program or function: KCUT is a software package that generates minimal cut sets for fault trees. 2 - Method of solution: Expand the fault tree into cut sets and delete non-minimal cut sets. 3 - Restrictions on the complexity of the problem: Size and complexity of the fault tree.

  16. Implementation of Steiner point of fuzzy set.

    Science.gov (United States)

    Liang, Jiuzhen; Wang, Dejiang

    2014-01-01

This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set, and two strategies are proposed. One is a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method, which tries to find the optimal α-cut set approaching the fuzzy set. Stability analysis of the Steiner point of a fuzzy set is also studied. Some experiments on image processing are given, in which the two methods are applied to compute the Steiner point of a fuzzy image, and both strategies show their own advantages.

  17. Obtaining a minimal set of rewrite rules

    CSIR Research Space (South Africa)

    Davel, M

    2005-11-01

In this paper the authors describe a new approach to rewrite rule extraction and analysis, using Minimal Representation Graphs. This approach provides a mechanism for obtaining the smallest possible rule set – within a context-dependent rewrite rule...

  18. Constructal entransy dissipation minimization for 'volume-point' heat conduction

    International Nuclear Information System (INIS)

    Chen Lingen; Wei Shuhuan; Sun Fengrui

    2008-01-01

The 'volume to point' heat conduction problem, which can be described as how to determine the optimal distribution of high conductivity material through a given volume such that the heat generated at every point is transferred most effectively to its boundary, has become the focus of attention in the current constructal theory literature. In general, the minimization of the maximum temperature difference in the volume is taken as the optimization objective. A new physical quantity, entransy, has recently been identified as a basis for optimizing heat transfer processes in terms of the analogy between heat conduction and electrical conduction. Heat transfer analyses show that the entransy of an object describes its heat transfer ability, just as the electrical energy in a capacitor describes its charge transfer ability. Entransy dissipation occurs during heat transfer processes, as a measure of the heat transfer irreversibility with the dissipation-related thermal resistance. By taking the equivalent thermal resistance (which corresponds to the mean temperature difference), defined on the basis of entransy dissipation and reflecting the average heat conduction effect, as the optimization objective, the 'volume to point' constructal problem is re-analysed and re-optimized in this paper. The constructal shape of the control volume with the best average heat conduction effect is deduced. For the elemental area and the first order construct assembly, when the thermal current density in the high conductive link is linear with the length, the optimized shapes of the assembly based on the minimization of entransy dissipation are the same as those based on minimization of the maximum temperature difference, and the mean temperature difference is 2/3 of the maximum temperature difference. For the second and higher order construct assemblies, the thermal current densities in the high conductive link are not linear with the length, and the optimized shapes of the assembly based on the

  19. BACFIRE, Minimal Cut Sets Common Cause Failure Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

1 - Description of problem or function: BACFIRE, designed to aid in common cause failure analysis, searches among the basic events of a minimal cut set of the system logic model for common potential causes of failure. A potential cause of failure is called a qualitative failure characteristic. The algorithm searches the qualitative failure characteristics (part of the program input) of the basic events contained in a set to find those characteristics common to all basic events. This search is repeated for all cut sets input to the program. Common cause failure analysis is thereby performed without the inclusion of secondary failures in the system logic model. By using BACFIRE, a common cause failure analysis can be added to an existing system safety and reliability analysis. 2 - Method of solution: BACFIRE searches the qualitative failure characteristics of the basic events contained in each fault tree minimal cut set to find those characteristics common to all basic events, by either of two criteria. The first criterion is met if all the basic events in a minimal cut set are associated by a condition which alone may increase the probability of multiple component malfunction. The second criterion is met if all the basic events in a minimal cut set are susceptible to the same secondary failure cause and are located in the same domain for that cause of secondary failure. 3 - Restrictions on the complexity of the problem: Maxima of 1001 secondary failure maps, 101 basic events, 10 cut sets.
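The first search criterion, finding characteristics shared by every basic event of a cut set, is a set intersection. The component names and characteristics below are hypothetical, and the sketch ignores BACFIRE's second, domain-based criterion.

```python
# Hypothetical qualitative failure characteristics per basic event
# (location, susceptibilities), in the spirit of BACFIRE's input data.
CHARS = {
    "PUMP-A":  {"room-101", "vibration", "moisture"},
    "PUMP-B":  {"room-101", "vibration"},
    "VALVE-C": {"room-204", "moisture"},
}

def common_causes(cut_set):
    """Characteristics shared by every basic event in the cut set;
    a non-empty result flags a potential common cause of failure."""
    events = iter(cut_set)
    common = set(CHARS[next(events)])
    for e in events:
        common &= CHARS[e]
    return common
```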

  20. On the structure of the set of coincidence points

    Energy Technology Data Exchange (ETDEWEB)

Arutyunov, A V [Peoples Friendship University of Russia, Moscow (Russian Federation); Gel'man, B D [Voronezh State University (Russian Federation)]

    2015-03-31

    We consider the set of coincidence points for two maps between metric spaces. Cardinality, metric and topological properties of the coincidence set are studied. We obtain conditions which guarantee that this set (a) consists of at least two points; (b) consists of at least n points; (c) contains a countable subset; (d) is uncountable. The results are applied to study the structure of the double point set and the fixed point set for multivalued contractions. Bibliography: 12 titles.

  1. Minimal set of auxiliary fields and S-matrix for extended supergravity

    Energy Technology Data Exchange (ETDEWEB)

Fradkin, E S; Vasiliev, M A [Lebedev Physical Institute, Moscow]

    1979-05-19

A minimal set of auxiliary fields for linearized SO(2) supergravity and a one-parameter extension of the minimal auxiliary fields in SO(1) supergravity are constructed. The expression for the S-matrix in SO(2) supergravity is given.

  2. Pointing with a One-Eyed Cursor for Supervised Training in Minimally Invasive Robotic Surgery

    DEFF Research Database (Denmark)

    Kibsgaard, Martin; Kraus, Martin

    2016-01-01

Pointing in the endoscopic view of a surgical robot is a natural and efficient way for instructors to communicate with trainees in robot-assisted minimally invasive surgery. However, pointing in a stereo-endoscopic view can be limited by problems such as video delay, double vision, arm fatigue......-day training units in robot-assisted minimally invasive surgery on anaesthetised pigs.

  3. Minimal spanning trees, filaments and galaxy clustering

    International Nuclear Information System (INIS)

    Barrow, J.D.; Sonoda, D.H.

    1985-01-01

    A graph theoretical technique for assessing intrinsic patterns in point data sets is described. A unique construction, the minimal spanning tree, can be associated with any point data set given all the inter-point separations. This construction enables the skeletal pattern of galaxy clustering to be singled out in quantitative fashion and differs from other statistics applied to these data sets. This technique is described and applied to two- and three-dimensional distributions of galaxies and also to comparable random samples and numerical simulations. The observed CfA and Zwicky data exhibit characteristic distributions of edge-lengths in their minimal spanning trees which are distinct from those found in random samples. (author)
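A minimal spanning tree of a point set can be built with Prim's algorithm on the complete graph of inter-point separations. The sketch below returns the edge-length distribution that such a clustering statistic is drawn from; the coordinates are toy data, not galaxy positions.

```python
import math

def mst_edge_lengths(points):
    """Prim's algorithm on the complete graph of a 2-D point set;
    returns the sorted edge lengths of the minimal spanning tree
    (the distribution a clustering statistic can be built from)."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n            # cheapest connection to the growing tree
    in_tree[0] = True
    for j in range(1, n):
        best[j] = math.dist(points[0], points[j])
    lengths = []
    for _ in range(n - 1):
        j = min((j for j in range(n) if not in_tree[j]), key=best.__getitem__)
        lengths.append(best[j])
        in_tree[j] = True
        for k in range(n):
            if not in_tree[k]:
                best[k] = min(best[k], math.dist(points[j], points[k]))
    return sorted(lengths)

pts = [(0, 0), (1, 0), (0, 1), (10, 10)]
```

A clustered distribution yields many short edges plus a few long bridges, which is the signature the edge-length histogram is meant to capture.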

  4. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular distributions of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample the irregular data sets in a near optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
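A one-point-per-step update that avoids matrix operations can be illustrated with a Kaczmarz-type projection: each step takes a single sample (x, y) and projects the current coefficients onto the constraint phi(x)·c = y. The basis choice, step schedule, and data below are assumptions for illustration; the paper's SA construction and NNR sampling differ in the details.

```python
import random

def features(x):
    """Hypothetical polynomial basis; the paper's basis choice may differ."""
    return [1.0, x, x * x]

def fit_sequential(data, sweeps=2000, seed=0):
    """Kaczmarz-style sequential approximation: each step uses one data
    point (x, y) and updates the coefficients by projecting onto the
    hyperplane phi(x).c = y -- no matrix operations involved."""
    rng = random.Random(seed)
    c = [0.0, 0.0, 0.0]
    for _ in range(sweeps):
        x, y = rng.choice(data)
        phi = features(x)
        norm2 = sum(p * p for p in phi)
        r = (y - sum(p * ci for p, ci in zip(phi, c))) / norm2
        c = [ci + r * p for ci, p in zip(c, phi)]
    return c

# Consistent toy data: y = x^2 on a uniform grid in [-1, 1].
data = [(x / 10, (x / 10) ** 2) for x in range(-10, 11)]
```

Because the target lies exactly in the span of the basis, each projection never increases the error, and the iterates converge to the interpolating coefficients.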

  5. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal...... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71.

  6. Simple Approaches to Minimally-Instrumented, Microfluidic-Based Point-of-Care Nucleic Acid Amplification Tests

    Science.gov (United States)

    Mauk, Michael G.; Song, Jinzhao; Liu, Changchun; Bau, Haim H.

    2018-01-01

Designs and applications of microfluidics-based devices for molecular diagnostics (Nucleic Acid Amplification Tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module for separating plasma from blood through a combined sedimentation-filtration effect. Three reporter types (fluorescence, colorimetric dyes, and bioluminescence) and a new paradigm for end-point detection based on a diffusion-reaction column are compared. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized using a cellphone platform with the chip. Some related and similar-purposed approaches by others are surveyed. PMID:29495424

  7. Simple Approaches to Minimally-Instrumented, Microfluidic-Based Point-of-Care Nucleic Acid Amplification Tests

    Directory of Open Access Journals (Sweden)

    Michael G. Mauk

    2018-02-01

Designs and applications of microfluidics-based devices for molecular diagnostics (Nucleic Acid Amplification Tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module for separating plasma from blood through a combined sedimentation-filtration effect. Three reporter types (fluorescence, colorimetric dyes, and bioluminescence) and a new paradigm for end-point detection based on a diffusion-reaction column are compared. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized using a cellphone platform with the chip. Some related and similar-purposed approaches by others are surveyed.

  8. Music analysis and point-set compression

    DEFF Research Database (Denmark)

    Meredith, David

    2015-01-01

COSIATEC, SIATECCompress and Forth’s algorithm are point-set compression algorithms developed for discovering repeated patterns in music, such as themes and motives that would be of interest to a music analyst. To investigate their effectiveness and versatility, these algorithms were evaluated on three analytical tasks that depend on the discovery of repeated patterns: classifying folk song melodies into tune families, discovering themes and sections in polyphonic music, and discovering subject and countersubject entries in fugues. Each algorithm computes a compressed encoding of a point-set representation of a musical object in the form of a list of compact patterns, each pattern being given with a set of vectors indicating its occurrences. However, the algorithms adopt different strategies in their attempts to discover encodings that maximize compression. The best-performing algorithm on the folk...
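The point-set representation these algorithms share can be illustrated with the maximal-translatable-pattern construction that SIA-family algorithms build on: for each translation vector v, the pattern MTP(v) is the set of points that recur when shifted by v. This is a sketch of the underlying representation, not of COSIATEC, SIATECCompress, or Forth's algorithm; the note data are made up, with each point given as (onset, pitch).

```python
from collections import defaultdict

def maximal_translatable_patterns(points):
    """For each nonzero vector v between an ordered pair of points,
    collect MTP(v) = {p : p + v is also in the point set}."""
    pset = set(points)
    mtps = defaultdict(list)
    for p in sorted(pset):
        for q in sorted(pset):
            if q > p:                     # each vector once, in lexical order
                v = (q[0] - p[0], q[1] - p[1])
                mtps[v].append(p)
    return dict(mtps)

# Hypothetical score: a three-note rising figure repeated ten beats later.
notes = [(0, 60), (1, 62), (2, 64), (10, 60), (11, 62), (12, 64)]
```

A compact pattern with many occurrence vectors compresses well, which is what the encodings produced by these algorithms exploit.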

  9. Automatic Generation of Minimal Cut Sets

    Directory of Open Access Journals (Sweden)

    Sentot Kromodimoeljo

    2015-06-01

A cut set is a collection of component failure modes that could lead to a system failure. Cut Set Analysis (CSA) is applied to critical systems to identify and rank system vulnerabilities at design time. Model checking tools have been used to automate the generation of minimal cut sets but are generally based on checking reachability of system failure states. This paper describes a new approach to CSA using a Linear Temporal Logic (LTL) model checker called BT Analyser that supports the generation of multiple counterexamples. The approach enables a broader class of system failures to be analysed, by generalising from failure state formulae to failure behaviours expressed in LTL. The traditional approach to CSA using model checking requires the model or system failure to be modified, usually by hand, to eliminate already-discovered cut sets, and the model checker to be rerun, at each step. By contrast, the new approach works incrementally and fully automatically, thereby removing the tedious and error-prone manual process and resulting in significantly reduced computation time. This in turn enables larger models to be checked. Two different strategies for using BT Analyser for CSA are presented. There is generally no single best strategy for model checking: their relative efficiency depends on the model and property being analysed. Comparative results are given for the A320 hydraulics case study in the Behavior Tree modelling language.

  10. Evaluating Diagnostic Point-of-Care Tests in Resource-Limited Settings

    Science.gov (United States)

    Drain, Paul K; Hyle, Emily P; Noubary, Farzad; Freedberg, Kenneth A; Wilson, Douglas; Bishai, William; Rodriguez, William; Bassett, Ingrid V

    2014-01-01

Diagnostic point-of-care (POC) testing is intended to minimize the time to obtain a test result, thereby allowing clinicians and patients to make an expeditious clinical decision. As POC tests expand into resource-limited settings (RLS), the benefits must outweigh the costs. To optimize POC testing in RLS, diagnostic POC tests need rigorous evaluations focused on relevant clinical outcomes and operational costs, which differ from evaluations of conventional diagnostic tests. Here, we reviewed published studies on POC testing in RLS and found no clearly defined metric for the clinical utility of POC testing. We therefore propose a framework for evaluating POC tests, and suggest and define the term “test efficacy” to describe a diagnostic test’s capacity to support a clinical decision within its operational context. We also propose revised criteria for an ideal diagnostic POC test in resource-limited settings. Through systematic evaluations, comparisons between centralized diagnostic testing and novel POC technologies can be more formalized, and health officials can better determine which POC technologies represent valuable additions to their clinical programs. PMID:24332389

  11. Smart Cup: A Minimally-Instrumented, Smartphone-Based Point-of-Care Molecular Diagnostic Device.

    Science.gov (United States)

    Liao, Shih-Chuan; Peng, Jing; Mauk, Michael G; Awasthi, Sita; Song, Jinzhao; Friedman, Harvey; Bau, Haim H; Liu, Changchun

    2016-06-28

    Nucleic acid amplification-based diagnostics offer rapid, sensitive, and specific means for detecting and monitoring the progression of infectious diseases. However, this method typically requires extensive sample preparation, expensive instruments, and trained personnel, all of which hinder its use in resource-limited settings, where many infectious diseases are endemic. Here, we report on a simple, inexpensive, minimally-instrumented, smart cup platform for rapid, quantitative molecular diagnostics of pathogens at the point of care. Our smart cup takes advantage of a water-triggered, exothermic chemical reaction to supply heat for the nucleic acid-based, isothermal amplification. The amplification temperature is regulated with a phase-change material (PCM). The PCM maintains the amplification reactor at a constant temperature, typically 60-65°C, when ambient temperatures range from 12 to 35°C. To eliminate the need for an optical detector and minimize cost, we use the smartphone's flashlight to excite the fluorescent dye and the phone camera to record real-time fluorescence emission during the amplification process. The smartphone can concurrently monitor multiple amplification reactors and analyze the recorded data. Our smart cup's utility was demonstrated by amplifying and quantifying herpes simplex virus type 2 (HSV-2) with a LAMP assay in our custom-made microfluidic diagnostic chip. We have consistently detected as few as 100 copies of HSV-2 viral DNA per sample. Our system does not require any lab facilities and is suitable for use at home, in the field, and in the clinic, as well as in resource-poor settings, where access to sophisticated laboratories is impractical, unaffordable, or nonexistent.

  12. Minimization of energy consumption in HVAC systems with data-driven models and an interior-point method

    International Nuclear Information System (INIS)

    Kusiak, Andrew; Xu, Guanglin; Zhang, Zijun

    2014-01-01

    Highlights: • We study the energy saving of HVAC systems with a data-driven approach. • We conduct an in-depth analysis of the topology of the developed Neural Network based HVAC model. • We apply an interior-point method to solve a Neural Network based HVAC optimization model. • The uncertain building occupancy is incorporated in the minimization of HVAC energy consumption. • A significant potential of saving HVAC energy is discovered. - Abstract: In this paper, a data-driven approach is applied to minimize the energy consumption of a heating, ventilating, and air conditioning (HVAC) system while maintaining the thermal comfort of a building with uncertain occupancy level. The uncertainty of the arrival and departure rates of occupants is modeled by the Poisson and uniform distributions, respectively. The internal heating gain is calculated from the stochastic process of the building occupancy. Based on the observed and simulated data, a multilayer perceptron algorithm is employed to model and simulate the HVAC system. The data-driven models accurately predict future performance of the HVAC system based on the control settings and the observed historical information. An optimization model is formulated and solved with the interior-point method. The optimization results are compared with the results produced by the simulation models.
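
    The interior-point idea can be illustrated on a one-dimensional toy problem. The sketch below is not the paper's neural-network model: it minimizes a hypothetical convex energy surrogate subject to box (comfort) limits by following the log-barrier central path, with each barrier subproblem solved by bisection on its monotone gradient.

```python
def central_path_point(df, lo, hi, mu, tol=1e-12):
    """Minimize f(x) - mu*log(x - lo) - mu*log(hi - x) for convex f by
    bisection on the barrier gradient, which is strictly increasing."""
    grad = lambda x: df(x) - mu / (x - lo) + mu / (hi - x)
    a, b = lo + 1e-12, hi - 1e-12
    while b - a > tol:
        m = 0.5 * (a + b)
        if grad(m) < 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

def barrier_minimize(df, lo, hi, mu=1.0, shrink=0.1, outer=8):
    """Interior-point outer loop: follow the central path while the
    barrier weight mu is driven toward zero."""
    for _ in range(outer):
        x = central_path_point(df, lo, hi, mu)
        mu *= shrink
    return x

# Hypothetical convex surrogate of HVAC energy vs. a normalized set-point:
# f(x) = (x - 2)^2 has its unconstrained minimum at 2.0, but the comfort
# band restricts the set-point to [0, 1], so the solution sits at x = 1.
x_opt = barrier_minimize(lambda x: 2.0 * (x - 2.0), 0.0, 1.0)
```

    As mu shrinks, the iterate is pushed to the constrained optimum on the comfort boundary, which mirrors how an interior-point solver trades off the energy objective against feasibility.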

  13. Denjoy minimal sets and Birkhoff periodic orbits for non-exact monotone twist maps

    Science.gov (United States)

    Qin, Wen-Xin; Wang, Ya-Nan

    2018-06-01

    A non-exact monotone twist map φ̄_F is the composition of an exact monotone twist map φ̄ with generating function H and a vertical translation V_F with V_F(x, y) = (x, y − F). We show in this paper that for each ω ∈ ℝ, there exists a critical value F_d(ω) ≥ 0 depending on H and ω such that for 0 ≤ F ≤ F_d(ω), the non-exact twist map φ̄_F has an invariant Denjoy minimal set with irrational rotation number ω lying on a Lipschitz graph, or Birkhoff (p, q)-periodic orbits for rational ω = p/q. As in the Aubry-Mather theory, we also construct heteroclinic orbits connecting Birkhoff periodic orbits, and show that quasi-periodic orbits in these Denjoy minimal sets can be approximated by periodic orbits. In particular, we demonstrate that at the critical value F = F_d(ω), the Denjoy minimal set is not uniformly hyperbolic and can be approximated by smooth curves.

  14. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Science.gov (United States)

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
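
    The smoothing step itself can be written directly from the thin-plate spline normal equations. The sketch below is illustrative, not the authors' implementation: it works on heightfield data z = f(x, y) rather than a general 3D point set, uses a fixed smoothing parameter lam instead of the bootstrap search described in the paper, and omits the k-nearest-neighbour projection.

```python
import numpy as np

def tps_smooth(pts, z, lam=1e-2):
    """Fit a smoothing thin-plate spline f(x, y) ~ z and return f at the
    input points (the 'denoised' heights).  lam = 0 interpolates the data
    exactly; large lam flattens the fit toward a least-squares plane."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    with np.errstate(divide='ignore', invalid='ignore'):
        K = np.where(d > 0, d ** 2 * np.log(d), 0.0)   # TPS kernel r^2 log r
    P = np.hstack([np.ones((n, 1)), pts])              # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + lam * np.eye(n)                    # smoothing regularizer
    A[:n, n:] = P
    A[n:, :n] = P.T
    sol = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    w, a = sol[:n], sol[n:]
    return K @ w + P @ a
```

    On noisy samples of a plane, a large lam recovers the plane almost exactly because affine functions are in the null space of the bending-energy penalty.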

  15. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speed up the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.
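
    The background grid that makes such point sets fast to process is the same structure commonly used to generate them. As a hedged illustration (this is Bridson-style dart throwing, not the paper's triangulation algorithm), a cell size of r/√2 guarantees at most one sample per cell, so each conflict check touches only a 5×5 block of cells.

```python
import math, random

def poisson_disk(width, height, r, k=30, seed=1):
    """Bridson-style Poisson-disk sampling with a background grid."""
    rng = random.Random(seed)
    cell = r / math.sqrt(2)                   # at most one sample per cell
    gw, gh = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * gh for _ in range(gw)]
    samples, active = [], []

    def fits(p):                              # check the 5x5 neighbourhood
        gx, gy = int(p[0] / cell), int(p[1] / cell)
        for i in range(max(gx - 2, 0), min(gx + 3, gw)):
            for j in range(max(gy - 2, 0), min(gy + 3, gh)):
                q = grid[i][j]
                if q is not None and (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 < r * r:
                    return False
        return True

    def insert(p):
        grid[int(p[0] / cell)][int(p[1] / cell)] = p
        samples.append(p)
        active.append(p)

    insert((rng.uniform(0, width), rng.uniform(0, height)))
    while active:
        idx = rng.randrange(len(active))
        base = active[idx]
        for _ in range(k):                    # k candidates in the annulus [r, 2r)
            ang = rng.uniform(0.0, 2.0 * math.pi)
            rad = rng.uniform(r, 2.0 * r)
            p = (base[0] + rad * math.cos(ang), base[1] + rad * math.sin(ang))
            if 0.0 <= p[0] < width and 0.0 <= p[1] < height and fits(p):
                insert(p)
                break
        else:                                 # no candidate fitted: retire base
            active.pop(idx)
    return samples
```

    Every accepted sample is at least r away from all others, which is exactly the property downstream triangulation algorithms rely on.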

  16. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Directory of Open Access Journals (Sweden)

    Khang Jie Liew

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.

  17. Extreme simplification and rendering of point sets using algebraic multigrid

    NARCIS (Netherlands)

    Reniers, D.; Telea, A.C.

    2009-01-01

    We present a novel approach for extreme simplification of point set models, in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However, this requires using many primitives to render even moderately simple shapes. Often, one

  18. Extreme Simplification and Rendering of Point Sets using Algebraic Multigrid

    NARCIS (Netherlands)

    Reniers, Dennie; Telea, Alexandru

    2005-01-01

    We present a novel approach for extreme simplification of point set models in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However efficient, simple primitives are less effective in approximating large surface areas. A large

  19. Toward the International Classification of Functioning, Disability and Health (ICF) Rehabilitation Set: A Minimal Generic Set of Domains for Rehabilitation as a Health Strategy.

    Science.gov (United States)

    Prodinger, Birgit; Cieza, Alarcos; Oberhauser, Cornelia; Bickenbach, Jerome; Üstün, Tevfik Bedirhan; Chatterji, Somnath; Stucki, Gerold

    2016-06-01

    To develop a comprehensive set of the International Classification of Functioning, Disability and Health (ICF) categories as a minimal standard for reporting and assessing functioning and disability in clinical populations along the continuum of care. The specific aims were to specify the domains of functioning recommended for an ICF Rehabilitation Set and to identify a minimal set of environmental factors (EFs) to be used alongside the ICF Rehabilitation Set when describing disability across individuals and populations with various health conditions. Secondary analysis of existing data sets using regression methods (Random Forests and Group Lasso regression) and expert consultations. Along the continuum of care, including acute, early postacute, and long-term and community rehabilitation settings. Persons (N=9863) with various health conditions participated in primary studies. The number of respondents for whom the dependent variable data were available and used in this analysis was 9264. Not applicable. For regression analyses, self-reported general health was used as a dependent variable. The ICF categories from the functioning component and the EF component were used as independent variables for the development of the ICF Rehabilitation Set and the minimal set of EFs, respectively. Thirty ICF categories, to be complemented with 12 EFs, were identified as relevant to the identified ICF sets. The ICF Rehabilitation Set consists of 9 ICF categories from the component body functions and 21 from the component activities and participation. The minimal set of EFs contains 12 categories spanning all chapters of the EF component of the ICF. The identified sets serve as minimal generic sets of aspects of functioning in clinical populations for reporting data within and across health conditions, time, clinical settings including rehabilitation, and countries. These sets present a reference framework for harmonizing existing information on disability across

  20. Kelp diagrams : Point set membership visualization

    NARCIS (Netherlands)

    Dinkla, K.; Kreveld, van M.J.; Speckmann, B.; Westenberg, M.A.

    2012-01-01

    We present Kelp Diagrams, a novel method to depict set relations over points, i.e., elements with predefined positions. Our method creates schematic drawings and has been designed to take aesthetic quality, efficiency, and effectiveness into account. This is achieved by a routing algorithm, which

  1. On minimizers of causal variational principles

    International Nuclear Information System (INIS)

    Schiefeneder, Daniela

    2011-01-01

    Causal variational principles are a class of nonlinear minimization problems which arise in a formulation of relativistic quantum theory referred to as the fermionic projector approach. This thesis is devoted to a numerical and analytic study of the minimizers of a general class of causal variational principles. We begin with a numerical investigation of variational principles for the fermionic projector in discrete space-time. It is shown that for sufficiently many space-time points, the minimizing fermionic projector induces non-trivial causal relations on the space-time points. We then generalize the setting by introducing a class of causal variational principles for measures on a compact manifold. In our main result we prove under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed analysis of the minimizers. (orig.)

  2. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we get closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
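
    The mixture-model view of registration can be sketched compactly. The toy below is deliberately simplified and hedged: it uses a Gaussian rather than a Student's-t kernel and estimates only a translation (the paper handles non-rigid warps), but the E-step/M-step structure with soft correspondences is the same, with one point set acting as mixture centroids and the other as data.

```python
import numpy as np

def em_translate(X, Y, iters=60, sigma2=0.5):
    """Estimate a translation t aligning Y + t onto X by EM with soft
    correspondences.  Gaussian stand-in for the Student's-t mixture;
    the shifted Y points are the mixture centroids, X is the data."""
    t = np.zeros(X.shape[1])
    for _ in range(iters):
        C = Y + t                                     # current centroids
        diff = X[:, None, :] - C[None, :, :]
        d2 = (diff ** 2).sum(2)
        P = np.exp(-d2 / (2.0 * sigma2))              # E-step: responsibilities
        P /= np.maximum(P.sum(1, keepdims=True), 1e-300)
        t = t + (P[..., None] * diff).sum((0, 1)) / P.sum()   # M-step
        # shrink the bandwidth from the (pre-update) residuals: a simple
        # annealing choice that hardens the correspondences over time
        sigma2 = max((P * d2).sum() / (P.sum() * X.shape[1]), 1e-9)
    return t
```

    With exact data the correspondences harden as sigma2 anneals, and the estimated translation converges to the true offset.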

  3. A randomized prospective study of desflurane versus isoflurane in minimal flow anesthesia using “equilibration time” as the change-over point to minimal flow

    Science.gov (United States)

    Mallik, Tanuja; Aneja, S; Tope, R; Muralidhar, V

    2012-01-01

    Background: In the administration of minimal flow anesthesia, traditionally a fixed period of high flow has been used before changing over to minimal flow. However, newer studies have used the “equilibration time” of a volatile anesthetic agent as the change-over point. Materials and Methods: A randomized prospective study was conducted on 60 patients, who were divided into two groups of 30 patients each. Two volatile inhalational anesthetic agents were compared: group I received desflurane (n = 30) and group II isoflurane (n = 30). Both groups received an initial high flow until equilibration between the inspired (Fi) and expired (Fe) agent concentrations was achieved, which was defined as Fe/Fi = 0.8. The mean (SD) equilibration time was obtained for both agents. The drift in end-tidal agent concentration during minimal flow anesthesia and the recovery profile were then noted. Results: The mean equilibration times obtained for desflurane and isoflurane were 4.96 ± 1.60 and 16.96 ± 9.64 min (P < 0.001). The drift in end-tidal agent concentration over time was minimal in the desflurane group (P = 0.065). Recovery time was 5.70 ± 2.78 min in the desflurane group and 8.06 ± 31 min in the isoflurane group (P = 0.004). Conclusion: Use of the equilibration time of the volatile anesthetic agent as the change-over point from high flow to minimal flow can help us use minimal flow anesthesia more efficiently. PMID:23225926

  4. Optimal Set-Point Synthesis in HVAC Systems

    DEFF Research Database (Denmark)

    Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik

    2007-01-01

    This paper presents optimal set-point synthesis for a heating, ventilating, and air-conditioning (HVAC) system. This HVAC system is made of two heat exchangers: an air-to-air heat exchanger and a water-to-air heat exchanger. The objective function is composed of the electrical power for different...... components, encompassing fans, primary/secondary pump, tertiary pump, and air-to-air heat exchanger wheel; and a fraction of thermal power used by the HVAC system. The goals that have to be achieved by the HVAC system appear as constraints in the optimization problem. To solve the optimization problem......, a steady state model of the HVAC system is derived while different supplying hydronic circuits are studied for the water-to-air heat exchanger. Finally, the optimal set-points and the optimal supplying hydronic circuit are obtained....

  5. Optimal set of selected uranium enrichments that minimizes blending consequences

    International Nuclear Information System (INIS)

    Nachlas, J.A.; Kurstedt, H.A. Jr.; Lobber, J.S. Jr.

    1977-01-01

    Identities, quantities, and costs associated with producing a set of selected enrichments and blending them to provide fuel for existing reactors are investigated using an optimization model constructed with appropriate constraints. Selected enrichments are required for either nuclear reactor fuel standardization or potential uranium enrichment alternatives such as the gas centrifuge. Using a mixed-integer linear program, the model minimizes present worth costs for a 39-product-enrichment reference case. For four ingredients, the marginal blending cost is only 0.18% of the total direct production cost. Natural uranium is not an optimal blending ingredient. Optimal values reappear in most sets of ingredient enrichments

  6. Geometric Spanners for Weighted Point Sets

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Farshi, Mohammad

    2009-01-01

    Let (S,d) be a finite metric space, where each element p ∈ S has a non-negative weight w(p). We study spanners for the set S with respect to the weighted distance function d_w, where d_w(p,q) is w(p) + d(p,q) + w(q) if p ≠ q and 0 otherwise. We present a general method for turning spanners with respect...... to the d-metric into spanners with respect to the d_w-metric. For any given ε > 0, we can apply our method to obtain (5 + ε)-spanners with a linear number of edges for three cases: points in Euclidean space ℝ^d, points in spaces of bounded doubling dimension, and points on the boundary of a convex body...... in ℝ^d where d is the geodesic distance function. We also describe an alternative method that leads to (2 + ε)-spanners for points in ℝ^d and for points on the boundary of a convex body in ℝ^d. The number of edges in these spanners is O(n log n). This bound on the stretch factor is nearly optimal...
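
    The weighted distance d_w is itself a metric, so any generic spanner construction applies to it. The sketch below is not the authors' method (which achieves constant stretch with a near-linear number of edges); it is the classical greedy algorithm, which works for any metric, instantiated with d_w(p, q) = w(p) + |pq| + w(q):

```python
import heapq, itertools, math

def greedy_spanner(points, weights, t):
    """Classical greedy t-spanner under the weighted distance d_w.
    Candidate edges are scanned in increasing d_w order; an edge is kept
    only when the current spanner distance between its endpoints still
    exceeds t times d_w."""
    n = len(points)

    def dw(i, j):
        return weights[i] + math.dist(points[i], points[j]) + weights[j]

    adj = {i: [] for i in range(n)}

    def spanner_dist(s, g):                   # Dijkstra on current edges
        dist = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == g:
                return d
            if d > dist.get(u, math.inf):
                continue
            for v, l in adj[u]:
                nd = d + l
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return math.inf

    edges = []
    for i, j in sorted(itertools.combinations(range(n), 2), key=lambda e: dw(*e)):
        d = dw(i, j)
        if spanner_dist(i, j) > t * d:
            adj[i].append((j, d))
            adj[j].append((i, d))
            edges.append((i, j))
    return edges
```

    By construction, every pair of points ends up connected by a path of weighted length at most t times its d_w distance.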

  7. Multi-Agent Rendezvousing with a Finite Set of Candidate Rendezvous Points

    NARCIS (Netherlands)

    Fang, J.; Morse, A. S.; Cao, M.

    2008-01-01

    The discrete multi-agent rendezvous problem we consider in this paper is concerned with a specified set of points in the plane, called “dwell-points,” and a set of mobile autonomous agents with limited sensing range. Each agent is initially positioned at some dwell-point, and is able to determine

  8. Free time minimizers for the three-body problem

    Science.gov (United States)

    Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor

    2018-03-01

    Free time minimizers of the action (called "semi-static" solutions by Mañé in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set of mass ratios. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 1980) which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude being asymptotic to Euler's central configurations by a second variation argument. Central configurations correspond to rest points for the McGehee blown-up dynamics. The large open set of mass ratios are those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.

  9. Modelling occupants’ heating set-point preferences

    DEFF Research Database (Denmark)

    Andersen, Rune Vinther; Olesen, Bjarne W.; Toftum, Jørn

    2011-01-01

    consumption. Simultaneous measurement of the set-points of thermostatic radiator valves (TRVs) and of indoor and outdoor environment characteristics was carried out in 15 dwellings in Denmark in 2008. Linear regression was used to infer a model of occupants’ interactions with TRVs. This model could easily...... be implemented in most simulation software packages to increase the validity of the simulation outcomes....

  10. Power Trip Set-points of Reactor Protection System for New Research Reactor

    International Nuclear Information System (INIS)

    Lee, Byeonghee; Yang, Soohyung

    2013-01-01

    This paper deals with the trip set-points related to the reactor power, considering the reactivity induced accident (RIA) of a new research reactor. The possible scenarios of reactivity induced accidents were simulated and the effects of the trip set-points on the critical heat flux ratio (CHFR) were calculated. The proper trip set-points, which meet the acceptance criterion and guarantee sufficient margins from normal operation, were then determined. The three different trip set-points related to the reactor power are determined based on the RIA of the new research reactor during the FP condition, over 0.1%FP and under 0.1%FP. Under various reactivity insertion rates, the CHFRs are calculated and checked against the acceptance criterion. For an RIA at the FP condition, the acceptance criterion can be satisfied even if only the high power set-point is used for reactor trip. Since the design of the reactor is still progressing and needs a safety margin for possible design changes, 18 MW is recommended as the high power set-point. For an RIA at 0.1%FP, a high power set-point of 18 MW and a high log rate of 10%pp/s work well and the acceptance criterion is satisfied. For operations under 0.1%FP, the application of the high log rate is necessary for satisfying the acceptance criterion. Considering a possible decrease of the CHFR margin due to design changes, the high log rate is suggested to be 8%pp/s. The suggested trip set-points have been identified based on preliminary design data for the new research reactor; therefore, these trip set-points will be re-established as the design of the reactor progresses. The reactor protection system (RPS) of the new research reactor is designed for safe shutdown of the reactor and prevention of the release of radioactive material to the environment. The trip set-point of the RPS is essential for reactor safety and therefore should be determined to mitigate the consequences of accidents. At the same time, the trip set-point should secure margins from the normal operational condition to avoid

  11. Decision Optimization of Machine Sets Taking Into Consideration Logical Tree Minimization of Design Guidelines

    Science.gov (United States)

    Deptuła, A.; Partyka, M. A.

    2014-08-01

    The method of minimization of complex partial multi-valued logical functions determines the degree of importance of construction and exploitation parameters playing the role of logical decision variables. Logical functions are taken into consideration in the issues of modelling machine sets. In multi-valued logical functions with weighting products, it is possible to use a modified Quine-McCluskey algorithm for the minimization of multi-valued functions. Taking weighting coefficients into account in the logical tree minimization reflects a physical model of the object being analysed much better.
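
    The two-valued Quine-McCluskey combining step that the multi-valued variants generalize can be sketched briefly. This is the standard binary version, without the weighting coefficients or multi-valued extensions discussed in the record above: implicants that differ in exactly one bit are repeatedly merged, and terms that never merge are the prime implicants.

```python
from itertools import combinations

def prime_implicants(minterms, nbits):
    """Quine-McCluskey combining step: repeatedly merge implicants that
    differ in exactly one (non-dash) position; unmerged terms are the
    prime implicants, returned as strings over {'0', '1', '-'}."""
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            diff = [i for i in range(nbits) if a[i] != b[i]]
            if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
                merged.add(a[:diff[0]] + '-' + a[diff[0] + 1:])
                used.update((a, b))
        primes |= terms - used                # terms that merged nowhere
        terms = merged
    return primes
```

    For example, the two-variable function with minterms {0, 1, 3} reduces to the two prime implicants "0-" and "-1".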

  12. Assessment of Thailand indoor set-point impact on energy consumption and environment

    International Nuclear Information System (INIS)

    Yamtraipat, N.; Khedari, J.; Hirunlabh, J.; Kunchornrat, J.

    2006-01-01

    The paper presents an investigation of an indoor set-point standard for air-conditioned spaces as a tool to control the electrical energy consumption of air-conditioners in Thailand office buildings and to reduce air pollutants. One hundred and forty-seven air-conditioned rooms in 13 buildings nationwide were used as models to analyze the electricity consumption of air-conditioning systems according to their set indoor temperatures, which were below the standard set-point in a large proportion of cases. The electrical energy and environmental saving potentials in the country were then assessed under the assumption that the indoor set-point temperature is raised to the standard set-point of 26 °C. It was concluded that the impacts of an indoor set-point of air-conditioned rooms, set at 26 °C, on energy saving and on the environment are as follows: the overall electricity consumption saving would be 804.60 GWh/year, which would reduce the corresponding GHG emissions (mainly CO₂) from power plants by 579.31×10³ tons/year

  13. Function of One Regular Separable Relation Set Decided for the Minimal Covering in Multiple Valued Logic

    Directory of Open Access Journals (Sweden)

    Liu Yu Zhen

    2016-01-01

    Multiple-valued logic is an important branch of computer science and technology. It studies the theory of multiple-valued logic, multiple-valued circuits and systems, and their applications. In the theory of multiple-valued logic, one primary and important problem is the completeness of function sets, which can be solved by deciding all the precomplete sets (also called maximal closed sets) of the K-valued function sets, noted PK*; another is the decision for Sheffer functions, which can be totally solved by picking out all of the minimal coverings of the precomplete sets. In the function structure theory of multi-valued logic, the decision on Sheffer functions plays an important role. It involves the structure and decision of full multi-logic and partial multi-logic, and is closely related to the decision of completeness of functions, which can be done by deciding the minimal coverings of full multi-logic and partial multi-logic. By the theory of completeness of partial multi-logic, we prove that the function of one regular separable relation set is not a minimal covering of PK* under the conditions m = 2, σ = e.

  14. Robust surface registration using N-points approximate congruent sets

    Directory of Open Access Journals (Sweden)

    Yao Jian

    2011-01-01

    Scans acquired by 3D sensors are typically represented in a local coordinate system. When multiple scans, taken from different locations, represent the same scene, they must be registered to a common reference frame. We propose a fast and robust registration approach to automatically align two scans by finding two sets of N points that are approximately congruent under rigid transformation and lead to a good estimate of the transformation between their corresponding point clouds. Given two scans, our algorithm randomly searches for the best sets of congruent groups of points using a RANSAC-based approach. To successfully and reliably align two scans when there is only a small overlap, we improve the basic RANSAC random selection step by employing a weight function that approximates the probability of each pair of points in one scan to match one pair in the other. The search time to find pairs of congruent sets of N points is greatly reduced by employing a fast search codebook based on both binary and multi-dimensional lookup tables. Moreover, we introduce a novel indicator of the overlapping region quality, which is used to verify the estimated rigid transformation and to improve the alignment robustness. Our framework is general enough to incorporate and efficiently combine different point descriptors derived from geometric and texture-based feature points or scene geometrical characteristics. We also present a method to improve the matching effectiveness of texture feature descriptors by extracting them from an atlas of rectified images recovered from the scan reflectance image. Our algorithm is robust with respect to different sampling densities and also resilient to noise and outliers. We demonstrate its robustness and efficiency on several challenging scan datasets with varying degrees of noise, outliers, and extent of overlap, acquired from indoor and outdoor scenarios.
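
    The sample-fit-score loop at the heart of such RANSAC registration can be sketched in a simplified form. The sketch below is hedged: it assumes putative correspondences are already given (src[i] ↔ dst[i]) rather than searching for congruent N-point sets, and uses the standard Kabsch solution for the rigid fit of each minimal sample.

```python
import numpy as np

def rigid_from_pairs(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ~ q (Kabsch)."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_align(src, dst, trials=100, tol=1e-3, seed=0):
    """RANSAC over putative correspondences src[i] <-> dst[i]: fit a rigid
    motion to 3 random pairs and keep the motion with the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_n = None, -1
    for _ in range(trials):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = rigid_from_pairs(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        n_in = int((resid < tol).sum())
        if n_in > best_n:
            best, best_n = (R, t), n_in
    return best, best_n
```

    With exact inlier correspondences, any non-degenerate all-inlier sample recovers the true rigid motion, and gross outliers are simply outvoted.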

  15. Reevaluation of the PMS alarm set-points in OPR1000

    International Nuclear Information System (INIS)

    Roh, Kyung Ho; Yang, Sung Tae; Jung, Sung In

    2011-01-01

    In the Optimized Power Reactor 1000 (OPR1000), the common alarm of the plant monitoring system (PMS), which is related to the channel-to-channel core protection calculator (CPC), experiences frequent deviations in the departure from nucleate boiling ratio (DNBR) and the local power density (LPD) between the middle of the cycle and the end of the cycle. Because the channel-to-channel CPC causes deviations in the values of the DNBR and LPD, increases in the CPC input variables make these deviations exceed the alarm set-points. The CPC DNBR and LPD deviations are defined as follows: deviation = the average of the four CPC channels' LPD and DNBR values minus each individual channel's LPD and DNBR value. In this paper, we report on a review by the Korea Hydro and Nuclear Power Co., Ltd. (KHNP) regarding the suitability of the alarm set-points for the channel-to-channel deviations of the CPC DNBR and LPD. The set-points were reevaluated in light of operational experience and the case of Palo Verde (the reference model of OPR1000). The KHNP consequently revised the relevant procedures, as well as the PMS alarm set-points, as part of its follow-up action

  16. Developing Common Set of Weights with Considering Nondiscretionary Inputs and Using Ideal Point Method

    Directory of Open Access Journals (Sweden)

    Reza Kiani Mavi

    2013-01-01

    Data envelopment analysis (DEA) is used to evaluate the performance of decision making units (DMUs) with multiple inputs and outputs in a homogeneous group. In this way, the acquired relative efficiency score for each decision making unit lies between zero and one, where a number of them may have an equal efficiency score of one. DEA successfully divides them into two categories: efficient DMUs and inefficient DMUs. A ranking for inefficient DMUs is given, but DEA does not provide further information about the efficient DMUs. One of the popular methods for evaluating and ranking DMUs is the common set of weights (CSW) method. We generate a CSW model considering nondiscretionary inputs, which are beyond the control of DMUs, using the ideal point method. The main idea of this approach is to minimize the distance between the evaluated decision making unit and the ideal decision making unit (ideal point). Using an empirical example, we put our proposed model to the test by applying it to the data of 20 bank branches and ranking their efficient units.

  17. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    Science.gov (United States)

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variations in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  18. Inverse consistent non-rigid image registration based on robust point set matching

    Science.gov (United States)

    2014-01-01

Background: Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. It is therefore an important issue to improve image registration based on RPM. Methods: In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between the two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into the point matching in order to improve image matching. Results: Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM, and, in particular, the topology of the transformations is well preserved by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated; again, our algorithm achieves lower registration errors within the same number of iterations.

  19. Life satisfaction set point: stability and change.

    Science.gov (United States)

    Fujita, Frank; Diener, Ed

    2005-01-01

Using data from 17 years of a large, nationally representative panel study from Germany, the authors examined whether there is a set point for life satisfaction (LS), that is, stability across time even though LS can be perturbed for short periods by life events. The authors found that 24% of respondents changed significantly in LS from the first 5 years to the last 5 years and that stability declined as the period between measurements increased. Average LS in the first 5 years correlated .51 with the 5-year average of LS during the last 5 years. Height, weight, body mass index, systolic and diastolic blood pressure, and personality traits were all more stable than LS, whereas income was about as stable as LS. Almost 9% of the sample changed by an average of 3 or more points on a 10-point scale from the first 5 to the last 5 years of the study.

  20. Counting convex polygons in planar point sets

    NARCIS (Netherlands)

    Mitchell, J.S.B.; Rote, G.; Sundaram, Gopalakrishnan; Woeginger, G.J.

    1995-01-01

Given a set S of n points in the plane, we compute in time O(n^3) the total number of convex polygons whose vertices are a subset of S. We give an O(m·n^3) algorithm for computing the number of convex k-gons with vertices in S, for all values k = 3, …, m; previously known bounds were exponential.
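The O(n^3) counting algorithm itself is more subtle, but the quantity being computed can be pinned down with a brute-force sketch: a k-subset of points in general position spans a convex k-gon exactly when all k of its points are vertices of the subset's convex hull. The helper names below are our own:

```python
from itertools import combinations
from math import comb, cos, sin, pi

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain: hull vertices of a point list (strict
    turns, so collinear interior points are not counted as vertices)."""
    pts = sorted(pts)
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(reversed(pts))
    return lower[:-1] + upper[:-1]

def count_convex_kgons(points, k):
    """Brute force over all k-subsets: a subset spans a convex k-gon iff
    every one of its points is a vertex of the subset's convex hull."""
    return sum(1 for s in combinations(points, k)
               if len(convex_hull(list(s))) == k)

# Points on a circle are in convex position: every 4-subset is a convex 4-gon.
n = 8
circle = [(cos(2 * pi * i / n), sin(2 * pi * i / n)) for i in range(n)]
print(count_convex_kgons(circle, 4))  # comb(8, 4) = 70
```

This exhaustive check runs in O(C(n, k)·k log k) time, which is what the paper's O(m·n^3) algorithm improves upon.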

  1. Discretized energy minimization in a wave guide with point sources

    Science.gov (United States)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  2. Statistical MOSFET Parameter Extraction with Parameter Selection for Minimal Point Measurement

    Directory of Open Access Journals (Sweden)

    Marga Alisjahbana

    2013-11-01

A method is presented to statistically extract MOSFET model parameters from a minimal number of transistor I(V) characteristic curve measurements taken during fabrication process monitoring. It includes a sensitivity analysis of the model, test/measurement point selection, and a parameter extraction experiment on the process data. The actual extraction is based on a linear error model, the sensitivity of the MOSFET model with respect to the parameters, and Newton-Raphson iterations. Simulated results showed good accuracy of parameter extraction and I(V) curve fit for parameter deviations of up to 20% from nominal values, including for a process shift of 10% from nominal.
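The pipeline described (sensitivity/Jacobian of the model, a linear error model, Newton-Raphson-style iterations) can be sketched on a deliberately simplified square-law MOSFET model. The model, bias points, and parameter values below are illustrative stand-ins, not the paper's:

```python
import numpy as np

# Simplified saturation-region square-law model (a stand-in for a full
# SPICE-level MOSFET model): Id = K * (Vgs - Vt)^2 for Vgs > Vt.
def model(vgs, K, Vt):
    return K * np.maximum(vgs - Vt, 0.0) ** 2

# "Measured" drain currents at a minimal set of gate voltages.
K_true, Vt_true = 2e-4, 0.7
vgs = np.array([1.0, 1.5, 2.0])
i_meas = model(vgs, K_true, Vt_true)

# Gauss-Newton iterations on the linearized error model: at each step
# solve J dp = r, where J is the sensitivity of the model w.r.t. (K, Vt).
p = np.array([1e-4, 0.5])                      # initial guess
for _ in range(20):
    K, Vt = p
    r = i_meas - model(vgs, K, Vt)             # residuals
    J = np.column_stack([
        np.maximum(vgs - Vt, 0.0) ** 2,        # dId/dK
        -2.0 * K * np.maximum(vgs - Vt, 0.0),  # dId/dVt
    ])
    dp, *_ = np.linalg.lstsq(J, r, rcond=None)
    p = p + dp
    if np.linalg.norm(dp) < 1e-12:
        break

print(p)  # should recover (K_true, Vt_true)
```

With noise-free data and a well-conditioned sensitivity matrix, the iteration converges to the true parameters; the paper's contribution is choosing the measurement points so that this stays true with few points and real process data.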

  3. Method of nuclear reactor control using a variable temperature load dependent set point

    International Nuclear Information System (INIS)

    Kelly, J.J.; Rambo, G.E.

    1982-01-01

A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point is disclosed. The set point is dependent upon the percent of full-power load demand. A manually actuated ''droop mode'' of control is provided whereby the reactor coolant temperature is allowed to drop below the set point temperature by a predetermined amount, whereupon control is switched from the reactor control rods exclusively to feedwater flow.

  4. Comparison of construction algorithms for minimal, acyclic, deterministic, finite-state automata from sets of strings

    NARCIS (Netherlands)

    Daciuk, J; Champarnaud, JM; Maurel, D

    2003-01-01

    This paper compares various methods for constructing minimal, deterministic, acyclic, finite-state automata (recognizers) from sets of words. Incremental, semi-incremental, and non-incremental methods have been implemented and evaluated.
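A compact sketch of the non-incremental family of methods compared in the paper: build a trie of the word set, then merge equivalent sub-automata bottom-up against a registry of representatives, yielding the minimal acyclic DFA. This is a simplification (the incremental variants avoid materializing the full trie), and all identifiers are our own:

```python
class Node:
    __slots__ = ("edges", "final")
    def __init__(self):
        self.edges, self.final = {}, False

def build_trie(words):
    root = Node()
    for w in words:
        n = root
        for c in w:
            n = n.edges.setdefault(c, Node())
        n.final = True
    return root

def minimize(node, registry):
    """Bottom-up minimization: replace each sub-automaton by a registered
    representative with the same right language (same finality and the
    same outgoing transitions to already-minimized states)."""
    for c, child in list(node.edges.items()):
        node.edges[c] = minimize(child, registry)
    key = (node.final, tuple(sorted((c, id(t)) for c, t in node.edges.items())))
    return registry.setdefault(key, node)

def accepts(root, word):
    n = root
    for c in word:
        n = n.edges.get(c)
        if n is None:
            return False
    return n.final

def num_states(root):
    seen, stack = set(), [root]
    while stack:
        n = stack.pop()
        if id(n) not in seen:
            seen.add(id(n))
            stack.extend(n.edges.values())
    return len(seen)

words = ["stop", "stops", "top", "tops"]
trie = build_trie(words)
print(num_states(trie))       # 10 states in the raw trie
dfa = minimize(trie, {})
print(num_states(dfa))        # 6 states after merging shared suffixes
```

The shared suffixes "op", "ops", etc. collapse into common states, which is exactly the saving a minimal recognizer provides over a trie.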

  5. Gap-minimal systems of notations and the constructible hierarchy

    Science.gov (United States)

    Lucian, M. L.

    1972-01-01

    If a constructibly countable ordinal alpha is a gap ordinal, then the order type of the set of index ordinals smaller than alpha is exactly alpha. The gap ordinals are the only points of discontinuity of a certain ordinal-valued function. The notion of gap minimality for well ordered systems of notations is defined, and the existence of gap-minimal systems of notations of arbitrarily large constructibly countable length is established.

  6. Computing half-plane and strip discrepancy of planar point sets

    NARCIS (Netherlands)

    Berg, de M.

    1996-01-01

We present efficient algorithms for two problems concerning the discrepancy of a set S of n points in the unit square in the plane. First, we describe an algorithm for maintaining the half-plane discrepancy of S under insertions and deletions of points. The algorithm runs in O(n log n) worst-case time.

  7. HIV-1 transmitting couples have similar viral load set-points in Rakai, Uganda.

    Directory of Open Access Journals (Sweden)

    T Déirdre Hollingsworth

    2010-05-01

It has been hypothesized that HIV-1 viral load set-point is a surrogate measure of HIV-1 viral virulence, and that it may be subject to natural selection in the human host population. A key test of this hypothesis is whether viral load set-points are correlated between transmitting individuals and those acquiring infection. We retrospectively identified 112 heterosexual HIV-discordant couples enrolled in a cohort in Rakai, Uganda, in which HIV transmission was suspected and viral load set-point was established. In addition, sequence data were available to establish transmission by genetic linkage for 57 of these couples. Sex, age, viral subtype, index partner, and self-reported genital ulcer disease status (GUD) were known. Using ANOVA, we estimated the proportion of variance in viral load set-points explained by the similarity within couples (the 'couple effect'). Individuals with suspected intra-couple transmission (97 couples) had similar viral load set-points (p = 0.054 single-factor model, p = 0.0057 adjusted), and the couple effect explained 16% of the variance in viral loads (23% adjusted). The analysis was repeated for a subset of 29 couples with strong genetic support for transmission. The couple effect was the major determinant of viral load set-point (p = 0.067 single-factor, p = 0.036 adjusted), and the size of the effect was 27% (37% adjusted). Individuals within epidemiologically linked couples with genetic support for transmission had similar viral load set-points. The most parsimonious explanation is that this is due to shared characteristics of the transmitted virus, a finding which sheds light on both the role of viral factors in HIV-1 pathogenesis and the evolution of the virus.
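The 'couple effect' is the proportion of set-point variance shared within couples, which for paired data can be estimated as a one-way intraclass correlation from the ANOVA mean squares. The sketch below runs on synthetic data with a built-in shared component of roughly the reported size; it is not the study's data or its adjusted model:

```python
import numpy as np

def icc_pairs(pairs):
    """One-way random-effects intraclass correlation for paired data
    (couples): the share of total variance attributable to the pair
    effect, from the between- and within-pair ANOVA mean squares."""
    pairs = np.asarray(pairs, dtype=float)
    k = len(pairs)
    grand = pairs.mean()
    ss_between = (2.0 * (pairs.mean(axis=1) - grand) ** 2).sum()
    ss_within = ((pairs - pairs.mean(axis=1, keepdims=True)) ** 2).sum()
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / k
    return (ms_between - ms_within) / (ms_between + ms_within)

# Synthetic log10 set-points for 100 couples: a shared couple component
# (sd 0.5) plus individual variation (sd 0.85), i.e. about 26% of the
# variance is shared, roughly the size of effect reported above.
rng = np.random.default_rng(0)
shared = rng.normal(0.0, 0.5, 100)
data = 4.5 + shared[:, None] + rng.normal(0.0, 0.85, (100, 2))
print(round(icc_pairs(data), 2))
```

For perfectly concordant pairs the estimate is 1; for perfectly discordant pairs it is -1, which makes the statistic easy to sanity-check.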

  8. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    Science.gov (United States)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for applying neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of ANN and conventional classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class, in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares an ANN back-propagation classification procedure with a conventional supervised maximum-likelihood classification procedure using a minimal training set. With a minimal training set, the neural network is able to provide a land-cover classification superior to that derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.
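The conventional baseline in this comparison, a Gaussian maximum-likelihood classifier trained on a minimal set of samples per class, can be sketched as follows; the two synthetic "land-cover" classes and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic spectral classes with a minimal training set of 10
# two-band pixels each (all values invented for this sketch).
train_a = rng.normal([0.2, 0.3], 0.05, (10, 2))   # e.g. water
train_b = rng.normal([0.6, 0.5], 0.05, (10, 2))   # e.g. vegetation

def fit_gaussian(X):
    """Class-conditional Gaussian: sample mean and regularized covariance."""
    return X.mean(axis=0), np.cov(X.T) + 1e-6 * np.eye(X.shape[1])

def log_likelihood(x, mu, cov):
    d = x - mu
    return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))

params = [fit_gaussian(train_a), fit_gaussian(train_b)]

def classify(x):
    """Maximum-likelihood decision rule over the fitted class densities."""
    return int(np.argmax([log_likelihood(x, mu, cov) for mu, cov in params]))

print(classify(np.array([0.21, 0.29])), classify(np.array([0.59, 0.52])))
```

The paper's point is that with so few training pixels the covariance estimates become unreliable, which is where the back-propagation network gained its advantage.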

  9. Reevaluation of steam generator level trip set point

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Yoon Sub; Soh, Dong Sub; Kim, Sung Oh; Jung, Se Won; Sung, Kang Sik; Lee, Joon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

The reactor trip on low steam generator water level accounts for a substantial portion of reactor scrams in a nuclear plant, and the feasibility of modifying the steam generator water level trip system of YGN 1/2 was evaluated in this study. The study revealed that removal of the reactor trip function from the SG water level trip system is not possible for plant safety reasons, but relaxation of the trip set point by 9% is feasible. The set point relaxation requires drilling new holes for level measurement in the operating steam generators. Characteristics of the negative neutron flux rate trip and reactor trip were also reviewed as additional work. Since the purpose of modifying the trip system to reduce the reactor scram frequency is not to satisfy legal requirements but to improve plant performance, and the modification has both positive and negative aspects, the decision on actual modification needs to be made based on the results of this study and the policy of the plant owner. 37 figs, 6 tabs, 14 refs. (Author).

  10. Systems biology perspectives on minimal and simpler cells.

    Science.gov (United States)

    Xavier, Joana C; Patil, Kiran Raosaheb; Rocha, Isabel

    2014-09-01

    The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. Copyright © 2014, American Society for Microbiology. All Rights Reserved.


  12. Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.

    Science.gov (United States)

    Saller, Maximilian A C; Habershon, Scott

    2017-07-11

    Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
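The pruning step named above, Matching Pursuit, greedily selects from an overcomplete dictionary the few basis functions needed to represent a target to a given tolerance. A generic sketch follows, using real-valued Gaussian atoms on a grid rather than the paper's trajectory-generated wavepackets; all names and values are illustrative:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 400)

# Overcomplete dictionary: normalized Gaussian atoms on a grid of centers.
centers = np.linspace(-4.0, 4.0, 41)
atoms = np.array([np.exp(-(x - c) ** 2) for c in centers])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

# Target "wave function" built from two localized bumps.
target = np.exp(-(x - 1.3) ** 2 / 2) + 0.5 * np.exp(-(x + 2.1) ** 2)

def matching_pursuit(target, atoms, tol=1e-3, max_atoms=50):
    """Greedily pick the atom most correlated with the residual, subtract
    its projection, and stop once the residual falls below tolerance."""
    residual = target.copy()
    chosen = []
    for _ in range(max_atoms):
        scores = atoms @ residual
        k = int(np.argmax(np.abs(scores)))
        chosen.append((k, float(scores[k])))
        residual = residual - scores[k] * atoms[k]
        if np.linalg.norm(residual) < tol * np.linalg.norm(target):
            break
    return chosen, residual

chosen, residual = matching_pursuit(target, atoms)
print(len(chosen), float(np.linalg.norm(residual) / np.linalg.norm(target)))
```

The number of atoms retained is typically far smaller than the dictionary, which is the basis-set compression the abstract describes.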

  13. CD-Based Microfluidics for Primary Care in Extreme Point-of-Care Settings

    Directory of Open Access Journals (Sweden)

    Suzanne Smith

    2016-01-01

We review the utility of centrifugal microfluidic technologies applied to point-of-care diagnosis in extremely under-resourced environments. The various challenges faced in these settings are showcased, using areas in India and Africa as examples. Measures of the ability of integrated devices to effectively address point-of-care challenges are highlighted, and centrifugal microfluidic technologies, often termed CD-based microfluidics, are presented as a promising platform to address these challenges. We describe the advantages of centrifugal liquid handling, as well as the ability of a standard CD player to perform a number of common laboratory tests, fulfilling the role of an integrated lab-on-a-CD. Innovative centrifugal approaches for point-of-care in extremely resource-poor settings are highlighted, including sensing and detection strategies, smart power sources and biomimetic inspiration for environmental control. The evolution of centrifugal microfluidics, along with examples of commercial and advanced prototype centrifugal microfluidic systems, is presented, illustrating successful deployment at the point-of-care. A close fit of emerging centrifugal systems to a critical panel of tests for under-resourced clinic settings, formulated by medical experts, is demonstrated. This emphasizes the potential of centrifugal microfluidic technologies to be applied effectively to extremely challenging point-of-care scenarios and to play a role in improving primary care in resource-limited settings across the developing world.

  14. On almost-periodic points of a topological Markov chain

    International Nuclear Information System (INIS)

    Bogatyi, Semeon A; Redkozubov, Vadim V

    2012-01-01

We prove that a transitive topological Markov chain has almost-periodic points of all D-periods. Moreover, every D-period is realized by continuum many distinct minimal sets. We give a simple constructive proof of the result asserting that any transitive topological Markov chain has periodic points of almost all periods, and we study the structure of the finite set of positive integers that are not periods.

  15. Some fixed point theorems on non-convex sets

    Directory of Open Access Journals (Sweden)

    Mohanasundaram Radhakrishnan

    2017-10-01

In this paper, we prove that if $K$ is a nonempty weakly compact set in a Banach space $X$, $T:K\to K$ is a nonexpansive map satisfying $\frac{x+Tx}{2}\in K$ for all $x\in K$, and $X$ is $3$-uniformly convex or $X$ has the Opial property, then $T$ has a fixed point in $K$.
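The map x -> (x + Tx)/2 appearing in the hypothesis is exactly the Krasnoselskii averaged iteration, which can converge to a fixed point even when plain iteration of a nonexpansive T merely orbits. A finite-dimensional toy example (our own choice, a 90-degree rotation, which is an isometry with unique fixed point 0):

```python
import numpy as np

# T: rotation of the plane by 90 degrees, a nonexpansive map (isometry)
# whose unique fixed point is the origin. Plain iteration x -> Tx only
# orbits the origin; the averaged map x -> (x + Tx)/2 from the theorem's
# hypothesis contracts toward the fixed point.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
T = lambda x: R @ x

x = np.array([1.0, 0.0])
for _ in range(100):
    x = (x + T(x)) / 2.0

print(np.linalg.norm(x))  # essentially 0: the iterate reached the fixed point
```

Here the averaged map (I + R)/2 has spectral radius 1/sqrt(2) < 1, so the iterates spiral into the fixed point; the theorem concerns the far harder infinite-dimensional setting where no such contraction is available.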

  16. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  17. Generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators with applications in non-compact settings and minimization problems

    Directory of Open Access Journals (Sweden)

    Chowdhury Molhammad SR

    2000-01-01

Results are obtained on existence theorems of generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators in both compact and non-compact settings. We use the concept of escaping sequences introduced by Border (Fixed Point Theorems with Applications to Economics and Game Theory, Cambridge University Press, Cambridge, 1985) to obtain results in non-compact settings. Existence theorems on non-compact generalized bi-complementarity problems for quasi-semi-monotone and bi-quasi-semi-monotone operators are also obtained. Moreover, as applications of some results of this paper on generalized bi-quasi-variational inequalities, we obtain existence of solutions for some kinds of minimization problems with quasi-semi-monotone and bi-quasi-semi-monotone operators.

  18. Differential calculus on the space of Steiner minimal trees in Riemannian manifolds

    International Nuclear Information System (INIS)

    Ivanov, A O; Tuzhilin, A A

    2001-01-01

    It is proved that the length of a minimal spanning tree, the length of a Steiner minimal tree, and the Steiner ratio regarded as functions of finite subsets of a connected complete Riemannian manifold have directional derivatives in all directions. The derivatives of these functions are calculated and some properties of their critical points are found. In particular, a geometric criterion for a finite set to be critical for the Steiner ratio is found. This criterion imposes essential restrictions on the geometry of the sets for which the Steiner ratio attains its minimum, that is, the sets on which the Steiner ratio of the boundary set is equal to the Steiner ratio of the ambient space
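Of the three functions studied, the minimal spanning tree length is the easiest to evaluate for a concrete finite point set; a Prim's-algorithm sketch on the complete Euclidean graph (all identifiers ours):

```python
import numpy as np

def mst_length(points):
    """Total length of a minimal spanning tree of a finite planar point
    set, via Prim's algorithm on the complete Euclidean graph."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    in_tree = np.zeros(n, dtype=bool)
    dist = np.full(n, np.inf)   # distance from each point to the tree
    dist[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = int(np.argmin(np.where(in_tree, np.inf, dist)))
        total += dist[u]
        in_tree[u] = True
        dist = np.minimum(dist, np.linalg.norm(pts - pts[u], axis=1))
    return total

# Unit square: the minimal spanning tree is three of the four sides.
print(mst_length([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 3.0
```

For the unit equilateral triangle this gives length 2, while the Steiner minimal tree through the added Fermat point has length sqrt(3), so the Steiner ratio of that boundary set is sqrt(3)/2 ≈ 0.866, the kind of critical configuration the paper's criterion constrains.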

  19. A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets

    Directory of Open Access Journals (Sweden)

    Vilius Matiukas

    2011-08-01

This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. Such point sets can be acquired using different techniques, such as 3D laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry and reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts the three alternatives.

  20. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

Minimal Surfaces is the first volume of a three-volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area or, equivalently, as surfaces of vanishing mean curvature.

  1. Arithmetically Cohen-Macaulay sets of points in P^1 x P^1

    CERN Document Server

    Guardo, Elena

    2015-01-01

This brief presents a solution to the interpolation problem for arithmetically Cohen-Macaulay (ACM) sets of points in the multiprojective space P^1 x P^1.  It collects the various current threads in the literature on this topic with the aim of providing a self-contained, unified introduction while also advancing some new ideas.  The relevant constructions related to multiprojective spaces are reviewed first, followed by the basic properties of points in P^1 x P^1, the bigraded Hilbert function, and ACM sets of points.  The authors then show how, using a combinatorial description of ACM points in P^1 x P^1, the bigraded Hilbert function can be computed and, as a result, solve the interpolation problem.  In subsequent chapters, they consider fat points and double points in P^1 x P^1 and demonstrate how to use their results to answer questions and problems of interest in commutative algebra.  Throughout the book, chapters end with a brief historical overview, citations of related results, and, where relevant...

  2. The descriptive set-theoretic complexity of the set of points of continuity of a multi-valued function (Extended Abstract)

    Directory of Open Access Journals (Sweden)

    Vassilios Gregoriades

    2010-06-01

In this article we treat a notion of continuity for a multi-valued function F and we compute the descriptive set-theoretic complexity of the set of all x for which F is continuous at x. We give conditions under which the latter set is either a G_delta set or the countable union of G_delta sets. We also provide a counterexample showing that the latter result is optimal under the same conditions. Moreover, we prove that those conditions are necessary in order to obtain that the set of points of continuity of F is Borel; i.e., we show that if we drop some of the previous conditions then there is a multi-valued function F whose graph is a Borel set and whose set of points of continuity is not a Borel set. Finally, we give some analogous results regarding a stronger notion of continuity for a multi-valued function. This article is motivated by a question of M. Ziegler in "Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability with Applications to Linear Algebra" (submitted).

  3. Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging

    Science.gov (United States)

    Lin, Bingxiong; Sun, Yu; Qian, Xiaoning

    2013-03-01

Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that improves feature matching performance by exploiting the inherent geometric properties of the organ surfaces. Recently, intensity-based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. Intensity-based tracking is also used here for 3D reconstruction of internal organ surfaces. To overcome the small-displacement requirement of intensity-based tracking, feature point correspondences are used for proper initialization of the nonlinear optimization in the intensity-based method. In addition, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust with respect to view angle changes than other state-of-the-art methods.
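The thin-plate spline model named above admits a compact closed-form fit: given matched control points, the kernel system with U(r) = r^2 log r plus an affine part is solved exactly. A minimal 2-D sketch (illustrative points, no smoothing or intensity term, all names ours):

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline interpolating src -> dst control points
    (kernel U(r) = r^2 log r, no smoothing); returns a warp function."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)

    def U(r):
        out = np.zeros_like(r)
        nz = r > 0
        out[nz] = r[nz] ** 2 * np.log(r[nz])
        return out

    # Standard TPS linear system: [[K, P], [P^T, 0]] [w; a] = [dst; 0].
    K = U(np.linalg.norm(src[:, None] - src[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.vstack([dst, np.zeros((3, 2))])
    coeffs = np.linalg.solve(A, b)
    w, a = coeffs[:n], coeffs[n:]

    def warp(pts):
        pts = np.asarray(pts, float)
        Kp = U(np.linalg.norm(pts[:, None] - src[None, :], axis=2))
        return Kp @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
    return warp

# Five matched feature points; the warp reproduces them exactly and
# bends the rest of the plane smoothly.
src = [(0, 0), (1, 0), (0, 1), (1, 1), (0.4, 0.6)]
dst = [(0, 0), (1.1, 0), (0, 1), (1, 1.1), (0.5, 0.6)]
warp = tps_fit(src, dst)
print(np.abs(warp(src) - np.asarray(dst, float)).max())
```

The paper builds on this interpolant as part of a stereo intensity-based tracker; the sketch only shows the spatial mapping step.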

  4. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) - G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results.
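The abstract lists barrier-function methods among the special cases; a one-dimensional log-barrier sketch makes the claimed behavior of {f(x^k)} concrete. The objective and barrier below are our own toy choices, not from the paper:

```python
import math

# Constrained problem: minimize f(x) = (x + 1)^2 subject to x >= 0, so
# D = (0, inf), C = [0, inf), the constrained minimizer is x-hat = 0 and
# f(x-hat) = 1. Step k minimizes G_k(x) = f(x) - (1/k) * ln(x) over D.
f = lambda x: (x + 1.0) ** 2

xs = []
for k in range(1, 8):
    # G_k'(x) = 2(x + 1) - 1/(k x) = 0  <=>  2x^2 + 2x - 1/k = 0;
    # the positive root is the unconstrained minimizer over D:
    x_k = (-1.0 + math.sqrt(1.0 + 2.0 / k)) / 2.0
    xs.append(x_k)

print([round(f(x), 4) for x in xs])  # decreasing toward f(x-hat) = 1
```

As the barrier weight 1/k shrinks, the unconstrained minimizers x^k slide toward the boundary point x̂ = 0 and the objective values f(x^k) decrease monotonically toward f(x̂), exactly the behavior the SUMMA framework guarantees.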

  5. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs

    Directory of Open Access Journals (Sweden)

    Long Wan

    2015-01-01

This paper investigates a single-machine two-agent scheduling problem to minimize the maximum cost with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of a job is characterized by a variable function of the position of the job in the sequence. Each agent wants to minimize the maximum cost of its own jobs. We develop a feasible method to obtain all the Pareto optimal points in polynomial time.

  6. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    Science.gov (United States)

    Gschwind, Michael K [Chappaqua, NY

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  7. Set-Point Theory and personality development : Reconciliation of a paradox

    NARCIS (Netherlands)

    Ormel, Johan; Von Korff, Michael; Jeronimus, Bertus F.; Riese, Harriette; Specht, Jule

    Set-point trait theories presume homeostasis at a specified level (stability/trait) and a surrounding “bandwidth” (change/state). The theory has been productively applied in studies on subjective well-being (SWB) but hardly in research on stability and change in personality (e.g. neuroticism,

  8. Minimal Blocking Sets in PG(2, 8) and Maximal Partial Spreads in PG(3, 8)

    DEFF Research Database (Denmark)

    Barat, Janos

    2004-01-01

    We prove that PG(2, 8) does not contain minimal blocking sets of size 14. Using this result we prove that 58 is the largest size for a maximal partial spread of PG(3, 8). This supports the conjecture that q^2 - q + 2 is the largest size for a maximal partial spread of PG(3, q), q > 7.

  9. Colouring the triangles determined by a point set

    Directory of Open Access Journals (Sweden)

    Ruy Fabila-Monroy

    2012-05-01

    Let P be a set of n points in general position in the plane. We study the chromatic number of the intersection graph of the open triangles determined by P. It is known that this chromatic number is at least n^3/27 + O(n^2) and, if P is in convex position, the answer is n^3/24 + O(n^2). We prove that for arbitrary P, the chromatic number is at most n^3/19.259 + O(n^2).

  10. Stability of the Minimizers of Least Squares with a Non-Convex Regularization. Part I: Local Behavior

    International Nuclear Information System (INIS)

    Durand, S.; Nikolova, M.

    2006-01-01

    Many estimation problems amount to minimizing a piecewise C^m objective function, with m ≥ 2, composed of a quadratic data-fidelity term and a general regularization term. It is widely accepted that the minimizers obtained using non-convex and possibly non-smooth regularization terms are frequently good estimates. However, few facts are known about how to control the properties of these minimizers. This work is dedicated to the stability of the minimizers of such objective functions with respect to variations of the data. It consists of two parts: first we consider all local minimizers, whereas in the second part we derive results on global minimizers. In this part we focus on data points such that every local minimizer is isolated and results from a C^{m-1} local minimizer function, defined on some neighborhood. We demonstrate that all data points for which this fails form a set whose closure is negligible.
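    A one-dimensional toy example (not from the paper) shows why stability is delicate with a non-convex regularizer: with a truncated-quadratic penalty, the global minimizer jumps as the data point y crosses a critical value, while away from that value it varies smoothly. All constants here are invented.

```python
import numpy as np

lam, alpha = 2.0, 1.0
xs = np.linspace(-3.0, 3.0, 6001)

def global_minimizer(y):
    # objective: quadratic data fidelity + truncated-quadratic regularizer
    obj = (xs - y) ** 2 + lam * np.minimum(xs ** 2, alpha)
    return xs[np.argmin(obj)]

ys = np.linspace(0.0, 3.0, 301)
track = np.array([global_minimizer(y) for y in ys])
largest_jump = np.abs(np.diff(track)).max()   # discontinuity near y = sqrt(3)
```

    For small y the minimizer follows the smooth branch x = y/3; for large y it follows x = y; the two branches exchange optimality at y = sqrt(3), where the minimizer-as-a-function-of-data is discontinuous.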

  11. On the Level Set of a Function with Degenerate Minimum Point

    Directory of Open Access Journals (Sweden)

    Yasuhiko Kamiyama

    2015-01-01

    For n ≥ 2, let M be an n-dimensional smooth closed manifold and f: M → R a smooth function. We set min f(M) = m and assume that m is attained at a unique point p ∈ M which is a nondegenerate critical point. Then the Morse lemma tells us that if a is slightly bigger than m, f^{-1}(a) is diffeomorphic to S^{n-1}. In this paper, we relax the condition on p from being nondegenerate to being an isolated critical point and obtain the same consequence. An application to the topology of polygon spaces is also included.

  12. Music analysis and point-set compression

    DEFF Research Database (Denmark)

    Meredith, David

    A musical analysis represents a particular way of understanding certain aspects of the structure of a piece of music. The quality of an analysis can be evaluated to some extent by the degree to which knowledge of it improves performance on tasks such as mistake spotting, memorising a piece...... as the minimum description length principle and relates closely to certain ideas in the theory of Kolmogorov complexity. Inspired by this general principle, the hypothesis explored in this paper is that the best ways of understanding (or explanations for) a piece of music are those that are represented...... by the shortest possible descriptions of the piece. With this in mind, two compression algorithms are presented, COSIATEC and SIATECCompress. Each of these algorithms takes as input an in extenso description of a piece of music as a set of points in pitch-time space representing notes. Each algorithm...

  13. TREDRA, Minimal Cut Sets Fault Tree Plot Program

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: TREDRA is a computer program for drafting report-quality fault trees. The input to TREDRA is similar to the input for standard computer programs that find minimal cut sets from fault trees. Output includes fault tree plots containing all standard fault tree logic and event symbols, gate and event labels, and an output description for each event in the fault tree. TREDRA contains the following features: a variety of program options that allow flexibility in the program output; capability for automatic pagination of the output fault tree, when necessary; input groups which allow labeling of gates, events, and their output descriptions; a symbol library which includes standard fault tree symbols plus several less frequently used symbols; user control of character size and overall plot size; and extensive input error checking and diagnostic-oriented output. 2 - Method of solution: Fault trees are generated from user-supplied control parameters and a coded description of the fault tree structure consisting of the name of each gate, the gate type, the number of inputs to the gate, and the names of these inputs. 3 - Restrictions on the complexity of the problem: TREDRA can produce fault trees with a minimum of 3 and a maximum of 56 levels. The width of each level may range from 3 to 37. A total of 50 transfers is allowed during pagination.

  14. 78 FR 24816 - Pricing for the 2013 American Eagle West Point Two-Coin Silver Set

    Science.gov (United States)

    2013-04-26

    ... DEPARTMENT OF THE TREASURY United States Mint Pricing for the 2013 American Eagle West Point Two-Coin Silver Set AGENCY: United States Mint, Department of the Treasury. ACTION: Notice. SUMMARY: The United States Mint is announcing the price of the 2013 American Eagle West Point Two-Coin Silver Set. The...

  15. Robust set-point regulation for ecological models with multiple management goals.

    Science.gov (United States)

    Guiver, Chris; Mueller, Markus; Hodgson, Dave; Townley, Stuart

    2016-05-01

    Population managers will often have to deal with problems of meeting multiple goals, for example, keeping at specific levels both the total population and population abundances in given stage-classes of a stratified population. In control engineering, such set-point regulation problems are commonly tackled using multi-input, multi-output proportional and integral (PI) feedback controllers. Building on our recent results for population management with single goals, we develop a PI control approach in a context of multi-objective population management. We show that robust set-point regulation is achieved by using a modified PI controller with saturation and anti-windup elements, both described in the paper, and illustrate the theory with examples. Our results apply more generally to linear control systems with positive state variables, including a class of infinite-dimensional systems, and thus have broader appeal.
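    A minimal discrete-time sketch of the idea, assuming a made-up two-stage population model with managed recruitment as the single input and total population as the single regulated output; the controller is a plain PI loop with a crude nonnegativity clamp, not the saturated anti-windup controller developed in the paper.

```python
import numpy as np

A = np.array([[0.0, 0.5],
              [0.7, 0.6]])      # hypothetical stage-transition matrix (spectral radius < 1)
b = np.array([1.0, 0.0])        # recruitment enters the first stage-class
target = 100.0                  # set-point for total population

x = np.array([10.0, 10.0])
kp, ki, integral = 0.01, 0.002, 0.0
totals = []
for _ in range(2000):
    error = target - x.sum()
    integral += error
    u = max(0.0, kp * error + ki * integral)  # recruitment cannot go negative
    x = A @ x + b * u
    totals.append(x.sum())
```

    Integral action drives the steady-state error to zero: for this model the steady-state gain from u to total population is sum((I − A)^{-1} b) = 22, so the loop settles with u* = target/22.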

  16. On the uniqueness of minimizers for a class of variational problems with Polyconvex integrand

    KAUST Repository

    Awi, Romeo

    2017-02-05

    We prove existence and uniqueness of minimizers for a family of energy functionals that arises in Elasticity and involves polyconvex integrands over a certain subset of displacement maps. This work extends previous results by Awi and Gangbo to a larger class of integrands. First, we study these variational problems over displacements for which the determinant is positive. Second, we consider a limit case in which the functionals are degenerate. In that case, the set of admissible displacements reduces to that of incompressible displacements which are measure preserving maps. Finally, we establish that the minimizer over the set of incompressible maps may be obtained as a limit of minimizers corresponding to a sequence of minimization problems over general displacements provided we have enough regularity on the dual problems. We point out that these results defy the direct methods of the calculus of variations.

  17. Eigenstrain as a mechanical set-point of cells.

    Science.gov (United States)

    Lin, Shengmao; Lampi, Marsha C; Reinhart-King, Cynthia A; Tsui, Gary; Wang, Jian; Nelson, Carl A; Gu, Linxia

    2018-02-05

    Cell contraction regulates how cells sense their mechanical environment. We sought to identify the set-point of cell contraction, also referred to as tensional homeostasis. In this work, bovine aortic endothelial cells (BAECs), cultured on substrates with different stiffness, were characterized using traction force microscopy (TFM). Numerical models were developed to provide insights into the mechanics of cell-substrate interactions. Cell contraction was modeled as eigenstrain which could induce isometric cell contraction without external forces. The predicted traction stresses matched well with TFM measurements. Furthermore, our numerical model provided cell stress and displacement maps for inspecting the fundamental regulating mechanism of cell mechanosensing. We showed that cell spread area, traction force on a substrate, as well as the average stress of a cell were increased in response to a stiffer substrate. However, the cell average strain, which is cell type-specific, was kept at the same level regardless of the substrate stiffness. This indicated that the cell average strain is the tensional homeostasis that each type of cell tries to maintain. Furthermore, cell contraction in terms of eigenstrain was found to be the same for both BAECs and fibroblast cells in different mechanical environments. This implied a potential mechanical set-point across different cell types. Our results suggest that additional measurements of contractility might be useful for monitoring cell mechanosensing as well as dynamic remodeling of the extracellular matrix (ECM). This work could help to advance the understanding of the cell-ECM relationship, leading to better regenerative strategies.

  18. Steiner minimal trees in small neighbourhoods of points in Riemannian manifolds

    Science.gov (United States)

    Chikin, V. M.

    2017-07-01

    In contrast to the Euclidean case, almost no Steiner minimal trees with concrete boundaries on Riemannian manifolds are known. A result describing the types of Steiner minimal trees on a Riemannian manifold for arbitrarily small boundaries is obtained. As a consequence, it is shown that for sufficiently small regular n-gons with n ≥ 7, their boundaries without a longest side are Steiner minimal trees. Bibliography: 22 titles.

  19. Minimal Gromov-Witten rings

    International Nuclear Information System (INIS)

    Przyjalkowski, V V

    2008-01-01

    We construct an abstract theory of Gromov-Witten invariants of genus 0 for quantum minimal Fano varieties (a minimal class of varieties which is natural from the quantum cohomological viewpoint). Namely, we consider the minimal Gromov-Witten ring: a commutative algebra whose generators and relations are of the form used in the Gromov-Witten theory of Fano varieties (of unspecified dimension). The Gromov-Witten theory of any quantum minimal variety is a homomorphism from this ring to C. We prove an abstract reconstruction theorem which says that this ring is isomorphic to the free commutative ring generated by 'prime two-pointed invariants'. We also find solutions of the differential equation of type DN for a Fano variety of dimension N in terms of the generating series of one-pointed Gromov-Witten invariants

  20. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  1. A REST Service for Triangulation of Point Sets Using Oriented Matroids

    Directory of Open Access Journals (Sweden)

    José Antonio Valero Medina

    2014-05-01

    This paper describes the implementation of a prototype REST service for triangulation of point sets collected by mobile GPS receivers. The first objective of this paper is to test the functionality of an application which exploits mobile devices' capabilities to get data associated with their spatial location. A triangulation of a set of points provides a mechanism through which it is possible to produce an accurate representation of spatial data. Such a triangulation may be used for representing surfaces by Triangulated Irregular Networks (TINs) and for decomposing complex two-dimensional spatial objects into simpler geometries. The second objective of this paper is to promote the use of oriented matroids for finding alternative solutions to spatial data processing and analysis tasks. This study focused on the particular case of the calculation of triangulations based on oriented matroids. The prototype described in this paper used a wrapper to integrate and expose several tools previously implemented in C++.
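    The record's oriented-matroid machinery cannot fit in a snippet, but its core primitive, the sign of an orientation or in-circle determinant, is exactly what drives the classical empty-circumcircle characterisation of Delaunay triangulations. A brute-force sketch (assumes points in general position; the sample coordinates are made up):

```python
from itertools import combinations

def cross(o, a, b):
    # sign of the orientation determinant: > 0 iff (o, a, b) turn counter-clockwise
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_circumcircle(a, b, c, d):
    # > 0 iff d lies strictly inside the circle through a, b, c (ccw order)
    m = [[a[0] - d[0], a[1] - d[1], (a[0] - d[0]) ** 2 + (a[1] - d[1]) ** 2],
         [b[0] - d[0], b[1] - d[1], (b[0] - d[0]) ** 2 + (b[1] - d[1]) ** 2],
         [c[0] - d[0], c[1] - d[1], (c[0] - d[0]) ** 2 + (c[1] - d[1]) ** 2]]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0

def delaunay(points):
    triangles = []
    for a, b, c in combinations(points, 3):
        if cross(a, b, c) == 0:
            continue              # skip collinear triples
        if cross(a, b, c) < 0:
            b, c = c, b           # enforce ccw orientation
        if not any(in_circumcircle(a, b, c, d)
                   for d in points if d not in (a, b, c)):
            triangles.append((a, b, c))
    return triangles

pts = [(0, 0), (4, 0), (5, 4), (0, 3)]
tris = delaunay(pts)
```

    For the four sample points, exactly one of the two diagonals of the convex quadrilateral passes the empty-circle test, so the triangulation has two triangles. This O(n^4) check is for illustration only; practical services use incremental or divide-and-conquer algorithms.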

  2. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    Science.gov (United States)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid

    2016-01-01

    The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as the bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest.
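    None of the five evaluated algorithms reduces to a few lines, but the affine case with known correspondences (the situation once fiducial landmarks have been paired) is a plain least-squares problem, sketched here on synthetic data; all shapes and transform parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
source = rng.uniform(-1.0, 1.0, size=(30, 3))   # synthetic surface points
A_true = np.array([[1.1, 0.1, 0.0],
                   [0.0, 0.9, 0.2],
                   [0.1, 0.0, 1.2]])            # assumed affine deformation
t_true = np.array([5.0, -2.0, 1.0])
target = source @ A_true.T + t_true

# least-squares affine fit: target ~ source @ A.T + t, solved in homogeneous form
X = np.hstack([source, np.ones((len(source), 1))])
params, *_ = np.linalg.lstsq(X, target, rcond=None)
registered = X @ params
tre = np.linalg.norm(registered - target, axis=1).mean()  # mean target registration error
```

    With noise-free correspondences the fit is exact to machine precision; the deformable methods in the record solve the much harder problem where correspondences themselves are unknown.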

  4. Convex Minimization with Constraints of Systems of Variational Inequalities, Mixed Equilibrium, Variational Inequality, and Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    We introduce and analyze an iterative algorithm based on the hybrid shrinking projection method for finding a solution of the minimization problem for a convex and continuously Fréchet differentiable functional, with constraints given by several problems: finitely many generalized mixed equilibrium problems, finitely many variational inequalities, the general system of variational inequalities and the fixed point problem of an asymptotically strict pseudocontractive mapping in the intermediate sense in a real Hilbert space. We prove a strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another iterative algorithm by the hybrid shrinking projection method for finding a fixed point of infinitely many nonexpansive mappings with the same constraints, and derive its strong convergence under mild assumptions.

  5. EVALUATION OF SETTING TIME OF MINERAL TRIOXIDE AGGREGATE AND BIODENTINE IN THE PRESENCE OF HUMAN BLOOD AND MINIMAL ESSENTIAL MEDIA - AN IN VITRO STUDY

    Directory of Open Access Journals (Sweden)

    Gopi Krishna Reddy Moosani

    2017-12-01

    BACKGROUND The aim of this study was to compare the ability of MTA and Biodentine to set in the presence of human blood and minimal essential media. MATERIALS AND METHODS Eighty 1 × 3 inch plexiglass sheets were taken. In each sheet, 10 wells were created, and the wells were divided into 10 groups. Odd-numbered groups were filled with MTA and even-numbered groups with Biodentine. Within these groups, 4 were control groups and the remaining 6 were experimental groups (i.e., blood, minimal essential media, or blood and minimal essential media). Each block was submerged for 4, 5, 6, 8, 24, 36, and 48 hours in an experimental liquid at 37°C with 100% humidity. RESULTS The setting times varied for the two materials, with contrasting differences between the MTA and Biodentine samples. The majority of the MTA samples had not set by 24 hours, though by 36 hours all of the MTA samples had set, while all of the Biodentine samples had set by 6 hours. There is a significant difference in setting time between MTA and Biodentine. CONCLUSION This outcome draws into question the setting time given by each respective manufacturer. Furthermore, with Biodentine being marketed as a direct competitor to MTA with superior handling properties, Biodentine consistently set at a faster rate under the conditions of this study.

  6. Keypoint-based 4-Points Congruent Sets - Automated marker-less registration of laser scans

    Science.gov (United States)

    Theiler, Pascal Willy; Wegner, Jan Dirk; Schindler, Konrad

    2014-10-01

    We propose a method to automatically register two point clouds acquired with a terrestrial laser scanner without placing any markers in the scene. What makes this task challenging are the strongly varying point densities caused by the line-of-sight measurement principle, and the huge amount of data. The first property leads to low point densities in potential overlap areas with scans taken from different viewpoints while the latter calls for highly efficient methods in terms of runtime and memory requirements. A crucial yet largely unsolved step is the initial coarse alignment of two scans without any simplifying assumptions, that is, point clouds are given in arbitrary local coordinates and no knowledge about their relative orientation is available. Once coarse alignment has been solved, scans can easily be fine-registered with standard methods like least-squares surface or Iterative Closest Point matching. In order to drastically thin out the original point clouds while retaining characteristic features, we resort to extracting 3D keypoints. Such clouds of keypoints, which can be viewed as a sparse but nevertheless discriminative representation of the original scans, are then used as input to a very efficient matching method originally developed in computer graphics, called 4-Points Congruent Sets (4PCS) algorithm. We adapt the 4PCS matching approach to better suit the characteristics of laser scans. The resulting Keypoint-based 4-Points Congruent Sets (K-4PCS) method is extensively evaluated on challenging indoor and outdoor scans. Beyond the evaluation on real terrestrial laser scans, we also perform experiments with simulated indoor scenes, paying particular attention to the sensitivity of the approach with respect to highly symmetric scenes.

  7. Spectral properties of minimal-basis-set orbitals: Implications for molecular electronic continuum states

    Science.gov (United States)

    Langhoff, P. W.; Winstead, C. L.

    Early studies of the electronically excited states of molecules by John A. Pople and coworkers employing ab initio single-excitation configuration interaction (SECI) calculations helped to stimulate related applications of these methods to the partial-channel photoionization cross sections of polyatomic molecules. The Gaussian representations of molecular orbitals adopted by Pople and coworkers can describe SECI continuum states when sufficiently large basis sets are employed. Minimal-basis virtual Fock orbitals stabilized in the continuous portions of such SECI spectra are generally associated with strong photoionization resonances. The spectral attributes of these resonance orbitals are illustrated here by revisiting previously reported experimental and theoretical studies of molecular formaldehyde (H2CO) in combination with recently calculated continuum orbital amplitudes.

  8. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    Science.gov (United States)

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks.
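    A sketch of the point-to-point matching idea on synthetic spectra (all band shapes, compositions and numbers invented): each reference spectrum is compared with the sample over an analyte-free spectral window, the closest reference is selected, and its subtraction leaves the analyte band.

```python
import numpy as np

wavenumbers = np.linspace(900.0, 1800.0, 200)
# hypothetical reference set: eluent background spectra at 26 gradient compositions
fracs = np.linspace(0.35, 0.85, 26)
background = np.exp(-((wavenumbers - 1640.0) / 300.0) ** 2)
refs = np.array([f * background for f in fracs])

analyte = np.exp(-((wavenumbers - 1520.0) / 15.0) ** 2)
sample = refs[17] + 0.2 * analyte          # analyte riding on background no. 17

window = slice(0, 60)                       # analyte-free region used for matching
d = ((refs[:, window] - sample[window]) ** 2).sum(axis=1)  # point-to-point distances
best = int(np.argmin(d))
corrected = sample - refs[best]             # background-corrected spectrum
```

    The matching window must exclude analyte bands, otherwise the analyte signal itself would bias the choice of reference spectrum.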

  9. Influence of occupant's heating set-point preferences on indoor environmental quality and heating demand in residential buildings

    DEFF Research Database (Denmark)

    Fabi, Valentina; Corgnati, Stefano Paolo; Andersen, Rune Korsholm

    2013-01-01

    of energy consumption. The aim was to compare the obtained results with a traditional deterministic use of the simulation program. Based on heating set-point behavior of 13 Danish dwellings, logistic regression was used to infer the probability of adjusting the set-point of thermostatic radiator valves...

  10. Set points, settling points and some alternative models: theoretical options to understand how genes and environments combine to regulate body adiposity

    Directory of Open Access Journals (Sweden)

    John R. Speakman

    2011-11-01

    The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy) to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the ‘obesity epidemic’ – the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models – the general intake model and the dual intervention point model – that address this issue and might offer better ways to understand how body fatness is controlled.

  11. On the Cut-off Point for Combinatorial Group Testing

    DEFF Research Database (Denmark)

    Fischer, Paul; Klasner, N.; Wegener, I.

    1999-01-01

    is answered by 1 if Q contains at least one essential object and by 0 otherwise. In the statistical setting the objects are essential, independently of each other, with a given probability p ... in the combinatorial setting the number k ... group testing is equal to p* = (3 − √5)/2, i.e., the strategy of testing each object individually minimizes the average number of queries iff p >= p* or n = 1. In the combinatorial setting the worst-case number of queries is of interest. It has been conjectured that the cut-off point of combinatorial
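    The cut-off value can be checked concretely in the smallest nontrivial case, n = 2, using the standard two-item strategy: test the pair first; if the pair is positive, test object 1, and test object 2 only when its status cannot be inferred. Pairing beats individual testing exactly when p is below p* = (3 − √5)/2 ≈ 0.382.

```python
import math

def individual(p):
    return 2.0            # query each of the two objects once

def pairwise(p):
    q = 1.0 - p           # probability an object is not essential
    # 1 query for the pair; if the pair is positive (prob 1 - q^2), query
    # object 1; query object 2 only if object 1 was positive (prob p) --
    # a negative object 1 in a positive pair implies object 2 is essential
    return 1.0 + (1.0 - q * q) + p

p_star = (3.0 - math.sqrt(5.0)) / 2.0
```

    The expected counts cross exactly at p*, since pairwise(p) − 2 = p − (1 − p)^2 vanishes precisely when p^2 − 3p + 1 = 0.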

  12. Permitted and forbidden sets in symmetric threshold-linear networks.

    Science.gov (United States)

    Hahnloser, Richard H R; Seung, H Sebastian; Slotine, Jean-Jacques

    2003-03-01

    The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
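    A small simulation illustrates the convergence criterion. The symmetric weight matrix below (values invented) makes I − W positive definite, so by the record's result the network converges to attractive fixed points and is not multiattractive; forward-Euler integration of the threshold-linear dynamics settles, from this initial condition, at the all-active steady state x = (I − W)^{-1} b.

```python
import numpy as np

W = np.array([[0.0, -0.4, -0.4],
              [-0.4, 0.0, -0.4],
              [-0.4, -0.4, 0.0]])   # symmetric mutual inhibition (toy values)
b = np.array([1.0, 1.0, 1.0])

# the record's dichotomy: eigenvalues of I - W all positive => not multiattractive
pos_def = bool(np.all(np.linalg.eigvalsh(np.eye(3) - W) > 0))

# forward-Euler simulation of the threshold-linear dynamics dx/dt = -x + [W x + b]_+
x = np.array([0.9, 0.1, 0.0])
dt = 0.01
for _ in range(20000):
    x = x + dt * (-x + np.maximum(W @ x + b, 0.0))

x_star = np.linalg.solve(np.eye(3) - W, b)  # all-active fixed point, (5/9, 5/9, 5/9)
```

    Strengthening the inhibition until I − W loses positive semidefiniteness would, per the record, split the attractor set into multiple permitted coactivation patterns.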

  13. A point cloud based pipeline for depth reconstruction from autostereoscopic sets

    Science.gov (United States)

    Niquin, Cédric; Prévost, Stéphanie; Remion, Yannick

    2010-02-01

    This is a three step pipeline to construct a 3D mesh of a scene from a set of N images, destined to be viewed on auto-stereoscopic displays. The first step matches the pixels to create a point cloud using a new algorithm based on graph-cuts. It exploits the data redundancy of the N images to ensure the geometric consistency of the scene and to reduce the graph complexity, in order to speed up the computation. It performs an accurate detection of occlusions and its results can then be used in applications like view synthesis. The second step slightly moves the points along the Z-axis to refine the point cloud. It uses a new cost including both occlusion positions and light variations deduced from the matching. The Z values are selected using a dynamic programming algorithm. This step finally generates a point cloud, which is fine enough for applications like augmented reality. From any of the two previously defined point clouds, the last step creates a colored mesh, which is a convenient data structure to be used in graphics APIs. It also generates N depth maps, allowing a comparison between the results of our method with those of other methods.

  14. Unbounded dynamics and compact invariant sets of one Hamiltonian system defined by the minimally coupled field

    Energy Technology Data Exchange (ETDEWEB)

    Starkov, Konstantin E., E-mail: kstarkov@ipn.mx

    2015-06-12

In this paper we study some features of the global dynamics of one Hamiltonian system arising in cosmology, formed by the minimally coupled field; this system was introduced by Maciejewski et al. in 2007. We establish that under some simple conditions imposed on the parameters of this system, all trajectories are unbounded in both time directions. Further, we present other conditions on the system parameters under which we localize the domain with unbounded dynamics; this domain is defined with the help of bounds on the values of the Hamiltonian level surface parameter. We describe the case in which our system possesses periodic orbits, which are found explicitly. In the remaining cases we obtain localization bounds for compact invariant sets. - Highlights: • The domain with unbounded dynamics is localized. • Equations for periodic orbits are given in one level set. • Localizations for compact invariant sets are obtained.

  15. Fermat's point from five perspectives

    Science.gov (United States)

    Park, Jungeun; Flores, Alfinio

    2015-04-01

The Fermat point of a triangle is the point that minimizes the sum of the distances to the three vertices. Five approaches to studying the Fermat point of a triangle are presented in this article. First, students use a mechanical device with masses, strings, and pulleys to study the Fermat point as the one that minimizes the potential energy of the system. Second, students use soap films between parallel planes connecting three pegs. The tension on the film is minimal when the sum of distances is minimal. Third, students use an empirical approach, measuring distances in an interactive GeoGebra page. Fourth, students use Euclidean geometry arguments for two proofs based on the Torricelli configuration and one using Viviani's theorem. And fifth, the kinematic method is used to gain additional insight into the size of the angles between the segments joining the Fermat point with the vertices.
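The minimization behind these approaches can also be done numerically. A minimal sketch, not from the article, using the classical Weiszfeld iteration, a fixed-point scheme for the point minimizing the sum of Euclidean distances, assuming all angles of the triangle are below 120° so the minimizer is interior:

```python
import numpy as np

def fermat_point(vertices, iters=200, eps=1e-12):
    """Weiszfeld iteration: fixed-point scheme for the point minimizing
    the sum of Euclidean distances to the given vertices."""
    v = np.asarray(vertices, dtype=float)
    p = v.mean(axis=0)                       # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(v - p, axis=1)
        if np.any(d < eps):                  # iterate landed on a vertex
            break
        w = 1.0 / d                          # inverse-distance weights
        p = (w[:, None] * v).sum(axis=0) / w.sum()
    return p
```

For an equilateral triangle the Fermat point coincides with the centroid, which makes a convenient sanity check.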

  16. Patient set-up verification by infrared optical localization and body surface sensing in breast radiation therapy

    International Nuclear Information System (INIS)

    Spadea, Maria Francesca; Baroni, Guido; Riboldi, Marco; Orecchia, Roberto; Pedotti, Antonio; Tagaste, Barbara; Garibaldi, Cristina

    2006-01-01

Background and purpose: The aim of the study was to investigate the clinical application of a technique for patient set-up verification in breast cancer radiotherapy, based on the 3D localization of a hybrid configuration of surface control points. Materials and methods: An infrared optical tracker provided the 3D position of two passive markers and 10 laser spots placed around and within the irradiation field on nine patients. A fast iterative constrained minimization procedure was applied to detect and compensate for patient set-up errors, through registration of the control points with reference data coming from the treatment plan (marker reference positions, CT-based surface model). Results: The application of the corrective spatial transformation estimated by the registration procedure led to significant improvement of patient set-up. The median value of the 3D errors affecting three additional verification markers within the irradiation field decreased from 5.7 to 3.5 mm. The variability of the errors (25-75%) decreased from 3.2 to 2.1 mm. Laser spot registration on the reference surface model was documented to contribute substantially to set-up error compensation. Conclusions: Patient set-up verification through a hybrid set of control points and a constrained surface minimization algorithm was confirmed to be feasible in clinical practice and to provide valuable information for improving the quality of patient set-up, with minimal requirement for operator-dependent procedures. The technique conveniently combines the advantages of passive-marker-based methods and surface registration techniques, by featuring immediate and robust estimation of the set-up accuracy from a redundant dataset.

  17. A comparison of Landsat point and rectangular field training sets for land-use classification

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    Rectangular training fields of homogeneous spectroreflectance are commonly used in supervised pattern recognition efforts. Trial image classification with manually selected training sets gives irregular and misleading results due to statistical bias. A self-verifying, grid-sampled training point approach is proposed as a more statistically valid feature extraction technique. A systematic pixel sampling network of every ninth row and ninth column efficiently replaced the full image scene with smaller statistical vectors which preserved the necessary characteristics for classification. The composite second- and third-order average classification accuracy of 50.1 percent for 331,776 pixels in the full image substantially agreed with the 51 percent value predicted by the grid-sampled, 4,100-point training set.
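The grid-sampling scheme is simple to reproduce. A minimal sketch with a hypothetical label raster (the paper's scene is a Landsat image; the array here is a stand-in):

```python
import numpy as np

# Stand-in for the full image: 270 rows x 180 columns of pixel values.
scene = np.arange(270 * 180).reshape(270, 180)

# Systematic sampling network of every ninth row and ninth column,
# replacing the full image with a much smaller training grid.
grid = scene[::9, ::9]
print(grid.shape)   # (30, 20): 600 training points instead of 48,600 pixels
```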

  18. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    Science.gov (United States)

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  19. Using point-set compression to classify folk songs

    DEFF Research Database (Denmark)

    Meredith, David

    2014-01-01

    -neighbour algorithm and leave-one-out cross-validation to classify the 360 melodies into tune families. The classifications produced by the algorithms were compared with a ground-truth classification prepared by expert musicologists. Twelve of the thirteen compressors used in the experiment were based...... compared. The highest classification success rate of 77–84% was achieved by COSIATEC, followed by 60–64% for Forth’s algorithm and then 52–58% for SIATECCompress. When the NCDs were calculated using bzip2, the success rate was only 12.5%. The results demonstrate that the effectiveness of NCD for measuring...... similarity between folk-songs for classification purposes is highly dependent upon the actual compressor chosen. Furthermore, it seems that compressors based on finding maximal repeated patterns in point-set representations of music show more promise for NCD-based music classification than general...
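The normalized compression distance underlying these experiments has a standard form: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length. A minimal sketch with bzip2 as the compressor, as in the abstract's baseline; the strings are hypothetical stand-ins for encoded melodies:

```python
import bz2

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(bz2.compress(x))
    cy = len(bz2.compress(y))
    cxy = len(bz2.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical stand-ins for encoded melodies: two related strings should be
# closer under NCD than two unrelated ones.
tune_a = b'C4 E4 G4 C5 ' * 200
tune_b = b'C4 E4 G4 E5 ' * 200          # a near-variant of tune_a
other = bytes(range(32, 127)) * 25      # unrelated material
```

In an NCD-based classifier, a melody is assigned to the tune family of its nearest neighbour under this distance.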

  20. R Implementation of a Polyhedral Approximation to a 3D Set of Points Using the α-Shape

    Directory of Open Access Journals (Sweden)

    Thomas Lafarge

    2014-01-01

Full Text Available This work presents the implementation in R of the α-shape of a finite set of points in the three-dimensional space R3. This geometric structure generalizes the convex hull and makes it possible to recover the shape of non-convex and even non-connected sets in 3D, given a random sample of points taken from it. Besides the computation of the α-shape, the R package alphashape3d provides users with tools to facilitate the three-dimensional graphical visualization of the estimated set as well as the computation of important characteristics such as the connected components or the volume, among others.

  1. Responsiveness and minimal clinically important change

    DEFF Research Database (Denmark)

    Christiansen, David Høyrup; Frost, Poul; Falla, Deborah

    2015-01-01

    Study Design A prospective cohort study nested in a randomized controlled trial. Objectives To determine and compare responsiveness and minimal clinically important change of the modified Constant score (CS) and the Oxford Shoulder Score (OSS). Background The OSS and the CS are commonly used...... to assess shoulder outcomes. However, few studies have evaluated the measurement properties of the OSS and CS in terms of responsiveness and minimal clinically important change. Methods The study included 126 patients who reported having difficulty returning to usual activities 8 to 12 weeks after...... were observed for the CS and the OSS. Minimal clinically important change ROC values were 6 points for the OSS and 11 points for the CS, with upper 95% cutoff limits of 12 and 22 points, respectively. Conclusion The CS and the OSS were both suitable for assessing improvement after decompression surgery....

  2. Does gastric bypass surgery change body weight set point?

    Science.gov (United States)

    Hao, Z; Mumphrey, M B; Morrison, C D; Münzberg, H; Ye, J; Berthoud, H R

    2016-12-01

    The relatively stable body weight during adulthood is attributed to a homeostatic regulatory mechanism residing in the brain which uses feedback from the body to control energy intake and expenditure. This mechanism guarantees that if perturbed up or down by design, body weight will return to pre-perturbation levels, defined as the defended level or set point. The fact that weight re-gain is common after dieting suggests that obese subjects defend a higher level of body weight. Thus, the set point for body weight is flexible and likely determined by the complex interaction of genetic, epigenetic and environmental factors. Unlike dieting, bariatric surgery does a much better job in producing sustained suppression of food intake and body weight, and an intensive search for the underlying mechanisms has started. Although one explanation for this lasting effect of particularly Roux-en-Y gastric bypass surgery (RYGB) is simple physical restriction due to the invasive surgery, a more exciting explanation is that the surgery physiologically reprograms the body weight defense mechanism. In this non-systematic review, we present behavioral evidence from our own and other studies that defended body weight is lowered after RYGB and sleeve gastrectomy. After these surgeries, rodents return to their preferred lower body weight if over- or underfed for a period of time, and the ability to drastically increase food intake during the anabolic phase strongly argues against the physical restriction hypothesis. However, the underlying mechanisms remain obscure. Although the mechanism involves central leptin and melanocortin signaling pathways, other peripheral signals such as gut hormones and their neural effector pathways likely contribute. Future research using both targeted and non-targeted 'omics' techniques in both humans and rodents as well as modern, genetically targeted, neuronal manipulation techniques in rodents will be necessary.

  3. Approximation of a Common Element of the Fixed Point Sets of Multivalued Strictly Pseudocontractive-Type Mappings and the Set of Solutions of an Equilibrium Problem in Hilbert Spaces

    Directory of Open Access Journals (Sweden)

    F. O. Isiogugu

    2016-01-01

    Full Text Available The strong convergence of a hybrid algorithm to a common element of the fixed point sets of multivalued strictly pseudocontractive-type mappings and the set of solutions of an equilibrium problem in Hilbert spaces is obtained using a strict fixed point set condition. The obtained results improve, complement, and extend the results on multivalued and single-valued mappings in the contemporary literature.

  4. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems

    Science.gov (United States)

    Junge, Oliver; Kevrekidis, Ioannis G.

    2017-06-01

We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as saddle-type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and we illustrate the procedure through corresponding numerical experiments.
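A minimal sketch of the idea, under simplifying assumptions not in the abstract (a one-dimensional logistic map instead of a general dynamical system, a derivative-free optimizer, and no Lennard-Jones spreading term): minimize, over the positions of a small point set, the summed squared distance from the image of each point back to the set.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, r=3.2):
    """Logistic map, standing in for the dynamics (an assumption, not the paper's example)."""
    return r * x * (1.0 - x)

def objective(x):
    """Distance between the finite set {x_j} and its image f({x_i}):
    for each point, squared distance from f(x_i) to the nearest x_j."""
    fx = f(x)
    return np.sum(np.min((fx[:, None] - x[None, :]) ** 2, axis=1))

# Two free points; a zero of the objective is a finite invariant set of the
# map (a periodic orbit or fixed points).
res = minimize(objective, x0=np.array([0.5, 0.8]), method='Nelder-Mead',
               options={'xatol': 1e-12, 'fatol': 1e-14})
```

The nearest-neighbour minimum makes the objective non-smooth, which is why a derivative-free method such as Nelder-Mead is used in this sketch.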

  5. Point process analyses of variations in smoking rate by setting, mood, gender, and dependence

    Science.gov (United States)

    Shiffman, Saul; Rathbun, Stephen L.

    2010-01-01

    The immediate emotional and situational antecedents of ad libitum smoking are still not well understood. We re-analyzed data from Ecological Momentary Assessment using novel point-process analyses, to assess how craving, mood, and social setting influence smoking rate, as well as assessing the moderating effects of gender and nicotine dependence. 304 smokers recorded craving, mood, and social setting using electronic diaries when smoking and at random nonsmoking times over 16 days of smoking. Point-process analysis, which makes use of the known random sampling scheme for momentary variables, examined main effects of setting and interactions with gender and dependence. Increased craving was associated with higher rates of smoking, particularly among women. Negative affect was not associated with smoking rate, even in interaction with arousal, but restlessness was associated with substantially higher smoking rates. Women's smoking tended to be less affected by negative affect. Nicotine dependence had little moderating effect on situational influences. Smoking rates were higher when smokers were alone or with others smoking, and smoking restrictions reduced smoking rates. However, the presence of others smoking undermined the effects of restrictions. The more sensitive point-process analyses confirmed earlier findings, including the surprising conclusion that negative affect by itself was not related to smoking rates. Contrary to hypothesis, men's and not women's smoking was influenced by negative affect. Both smoking restrictions and the presence of others who are not smoking suppress smoking, but others’ smoking undermines the effects of restrictions. Point-process analyses of EMA data can bring out even small influences on smoking rate. PMID:21480683

  6. A Defense of Semantic Minimalism

    Science.gov (United States)

    Kim, Su

    2012-01-01

    Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…

  7. An energy-saving set-point optimizer with a sliding mode controller for automotive air-conditioning/refrigeration systems

    International Nuclear Information System (INIS)

    Huang, Yanjun; Khajepour, Amir; Ding, Haitao; Bagheri, Farshid; Bahrami, Majid

    2017-01-01

Highlights: • A novel two-layer energy-saving controller for automotive A/C-R systems is developed. • A set-point optimizer in the outer loop is designed based on the steady-state model. • A sliding mode controller in the inner loop is built. • Extensive experimental studies show that about 9% of energy can be saved by this controller. - Abstract: This paper presents an energy-saving controller for automotive air-conditioning/refrigeration (A/C-R) systems. With their extensive application in homes, industry, and vehicles, A/C-R systems consume considerable amounts of energy. The proposed controller consists of two layers operating on different time scales. The outer, slow-time-scale layer, called a set-point optimizer, finds the set points related to energy efficiency using the steady-state model, whereas the inner, fast-time-scale layer tracks the obtained set points. In the inner loop, thanks to its robustness, a sliding mode controller (SMC) is utilized to track the set point of the cargo temperature. The currently used on/off controller is presented and employed as a basis for comparison with the proposed controller. More importantly, real experimental results under several disturbance scenarios are analysed to demonstrate how the proposed controller can improve performance while reducing energy consumption by 9% compared with the on/off controller. The controller is suitable for any type of A/C-R system, even though it is applied to an automotive A/C-R system in this paper.

  8. A criterion for flatness in minimal area metrics that define string diagrams

    International Nuclear Information System (INIS)

    Ranganathan, K.; Massachusetts Inst. of Tech., Cambridge, MA

    1992-01-01

It has been proposed that the string diagrams of closed string field theory be defined by a minimal area problem that requires all nontrivial homotopy curves to have length greater than or equal to 2π. Consistency requires that the minimal area metric be flat in a neighbourhood of the punctures. The theorem proven in this paper yields a criterion which, if satisfied, ensures this requirement. The theorem states roughly that the metric is flat in an open set U if there is a unique closed curve of length 2π through every point in U and all of these closed curves are in the same free homotopy class. (orig.)

  9. Minimal models for axion and neutrino

    Directory of Open Access Journals (Sweden)

    Y.H. Ahn

    2016-01-01

Full Text Available The PQ mechanism resolving the strong CP problem and the seesaw mechanism explaining the smallness of neutrino masses may be related in such a way that the PQ symmetry breaking scale and the seesaw scale arise from a common origin. Depending on how the PQ symmetry and the seesaw mechanism are realized, one has different predictions for the color and electromagnetic anomalies, which could be tested in future axion dark matter search experiments. Motivated by this, we construct various PQ seesaw models which are minimally extended from the (non-)supersymmetric Standard Model and thus set up different benchmark points for the axion–photon–photon coupling in comparison with the standard KSVZ and DFSZ models.

  10. Non-rigid point set registration of curves: registration of the superficial vessel centerlines of the brain

    Science.gov (United States)

    Marreiros, Filipe M. M.; Wang, Chunliang; Rossitti, Sandro; Smedby, Örjan

    2016-03-01

In this study we present a non-rigid point set registration method for 3D curves (composed of 3D points). The method was evaluated on the task of registering the 3D superficial vessels of the brain, where it was used to match vessel centerline points. It consists of a combination of Coherent Point Drift (CPD) and Thin-Plate Spline (TPS) semilandmarks. CPD is used to perform the initial matching of centerline 3D points, while the semilandmark method iteratively relaxes/slides the points. For the evaluation, a Magnetic Resonance Angiography (MRA) dataset was used. Deformations were applied to the extracted vessel centerlines to simulate brain bulging and sinking, using a TPS deformation in which a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation (T2). The errors are measured in the deformed space, by transforming the original points using T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data is incomplete, parts of the reference vessels were cut and then deformed. Furthermore, anisotropic normally distributed noise was added. The results show that the error estimates (root mean square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
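The TPS-from-correspondences step and the deformed-space error measurement can be sketched with SciPy's `RBFInterpolator`, whose `'thin_plate_spline'` kernel interpolates exactly at the control points when smoothing is zero. The points and displacements below are synthetic stand-ins for the vessel centerlines, not the paper's data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
ctrl = rng.uniform(0.0, 100.0, size=(12, 3))   # stand-in control points (mm)
disp = rng.normal(0.0, 2.0, size=(12, 3))      # known displacements defining T1
deformed = ctrl + disp

# Fit a TPS (playing the role of T2) from the known correspondences;
# smoothing=0 makes the spline interpolate the control points exactly.
tps = RBFInterpolator(ctrl, deformed, kernel='thin_plate_spline', smoothing=0.0)

# Error measured in the deformed space: transform the original points with
# the fitted TPS and compare against their known deformed positions.
err = np.linalg.norm(tps(ctrl) - deformed, axis=1)
```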

  11. [The first exploration of a minimally invasive lysis subcutaneouly for the treatment of gluteal muscle contracture based on relatively safe region around standard injection point of gluteal muscle].

    Science.gov (United States)

    Xiao, Ying; Tang, Zhi-hong; Zhang, Si-rong; Zou, Guo-yao; Xiao, Rong-chi; Liu, Rui-duan; Hu, Jun-zu

    2011-06-01

To explore the choice of the minimally invasive incision site for gluteal muscle contracture patients based on the standard injection point of the gluteal muscle. From September 2008 to August 2010, 25 patients (14 males and 11 females, with an average age of 16.5 years, ranging from 12 to 26 years) with injection-induced gluteal muscle contracture were prospectively studied. The course of disease was from 6 to 12 years. Firstly, the skin surface line connecting the anterior superior iliac spine to the coccyx (line AD) was delineated, and the point (point O) marking the standard gluteal muscle injection site was located at one-third of the distance from the anterior superior iliac spine (point A) to the coccyx (point D). Secondly, the anterior and posterior edge lines of the surface projection of the gluteal muscle contracture band (line a, line p) were delineated. Thirdly, the distances from B to O and from C to O were measured (B is the point of intersection of line a and line AD; C is the point of intersection of line p and line AD). Lastly, the minimally invasive surgery was performed via the skin entry at point C. OB = (0 +/- 0.76) cm, OC = (2.86 +/- 0.78) cm, BC = (2.86 +/- 1.01) cm; the mean postoperative drainage was less than 10 ml, and there was no nerve damage, hematoma, or other complications. All patients achieved the function of squatting in 4 to 6 days. Choosing the minimally invasive incision site based on the standard injection point of the gluteal muscle has the advantages of precise positioning, easy handling, quick recovery, less trauma, and safety.

  12. Embeddings of graphs into Euclidean space under which the number of points that belong to a hyperplane is minimal

    Energy Technology Data Exchange (ETDEWEB)

    Oblakov, Konstantin I; Oblakova, Tat'yana A [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2012-10-31

    The paper is devoted to the characteristic of a graph that is the minimal (over all embeddings of the graph into a space of given dimension) number of points that belong to the same hyperplane. Upper and lower estimates for this number are given that linearly depend on the dimension of the space. For trees a more precise upper estimate is obtained, which asymptotically coincides with the lower one for large dimension of the space. Bibliography: 9 titles.

  13. Specialized minimal PDFs for optimized LHC calculations

    NARCIS (Netherlands)

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-01-01

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct

  14. Optimal Load-Tracking Operation of Grid-Connected Solid Oxide Fuel Cells through Set Point Scheduling and Combined L1-MPC Control

    Directory of Open Access Journals (Sweden)

    Siwei Han

    2018-03-01

Full Text Available An optimal load-tracking operation strategy for a grid-connected tubular solid oxide fuel cell (SOFC) is studied based on a steady-state analysis of the system thermodynamics and electrochemistry. Control of the SOFC is achieved by a two-level hierarchical control system. In the upper level, the optimal set points of output voltage and current corresponding to a unit load demand are obtained through nonlinear optimization, minimizing the SOFC's internal power waste. In the lower level, a combined L1-MPC control strategy is designed to achieve fast set-point tracking under system nonlinearities, while maintaining a constant fuel utilization factor. To prevent fuel starvation during the transient state resulting from output power surges, a fuel flow constraint is imposed on the MPC with direct electron balance calculation. The proposed control schemes are tested on the grid-connected SOFC model.

  15. Minimizers with discontinuous velocities for the electromagnetic variational method

    International Nuclear Information System (INIS)

    De Luca, Jayme

    2010-01-01

The electromagnetic two-body problem has neutral differential delay equations of motion that, for generic boundary data, can have solutions with discontinuous derivatives. If one wants to use these neutral differential delay equations with arbitrary boundary data, solutions with discontinuous derivatives must be expected and allowed. Surprisingly, Wheeler-Feynman electrodynamics has a boundary value variational method for which minimizer trajectories with discontinuous derivatives are also expected, as we show here. The variational method defines continuous trajectories with piecewise defined velocities and accelerations, and electromagnetic fields defined by the Euler-Lagrange equations on trajectory points. Here we use the piecewise defined minimizers with the Liénard-Wiechert formulas to define generalized electromagnetic fields almost everywhere (but on sets of points of zero measure where the advanced/retarded velocities and/or accelerations are discontinuous). Along with this generalization we formulate the generalized absorber hypothesis that the far fields vanish asymptotically almost everywhere and show that localized orbits with far fields vanishing almost everywhere must have discontinuous velocities on sewing chains of breaking points. We give the general solution for localized orbits with vanishing far fields by solving a (linear) neutral differential delay equation for these far fields. We discuss the physics of orbits with discontinuous derivatives, stressing the differences from the variational methods of classical mechanics and the existence of a spinorial four-current associated with the generalized variational electrodynamics.

  16. Point source reconstruction principle of linear inverse problems

    International Nuclear Information System (INIS)

    Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu

    2010-01-01

Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, the elements of a source vector are partitioned into blocks. Accordingly, the leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the l_p-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of all the observations of source blocks, or in other words, the block-wisely extended leadfield-weighted l_1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.

  17. NP-hardness of the cluster minimization problem revisited

    Science.gov (United States)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  18. NP-hardness of the cluster minimization problem revisited

    International Nuclear Information System (INIS)

    Adib, Artur B

    2005-01-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested

  19. NP-hardness of the cluster minimization problem revisited

    Energy Technology Data Exchange (ETDEWEB)

    Adib, Artur B [Physics Department, Brown University, Providence, RI 02912 (United States)

    2005-10-07

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  20. Fitter. The package for fitting a chosen theoretical multi-parameter function through a set of data points. Application to experimental data of the YuMO spectrometer. Version 2.1.0. Long write-up and user's guide

    International Nuclear Information System (INIS)

    Solov'ev, A.G.; Stadnik, A.V.; Islamov, A.N.; Kuklin, A.I.

    2008-01-01

Fitter is a C++ program designed to fit a chosen theoretical multi-parameter function through a set of data points. The fitting method is chi-square minimization. Moreover, a robust fitting method can also be applied in Fitter. Fitter was designed for small-angle neutron scattering data analysis, and the respective theoretical models are implemented in it. Some commonly used models (Gaussian and polynomials) are also implemented for wider applicability.
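Fitter's core operation, chi-square minimization of a parametric model against weighted data, can be sketched in Python with SciPy. The exponential model and values below are illustrative stand-ins, not one of Fitter's SANS models:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(q, a, b):
    """Illustrative model: exponential decay in q^2 (an assumption for the demo)."""
    return a * np.exp(-b * q ** 2)

q = np.linspace(0.01, 0.5, 50)
sigma = np.full_like(q, 0.05)            # assumed measurement errors
y = model(q, 10.0, 4.0)                  # noiseless synthetic 'data'

# Weighted least squares with known sigmas is exactly chi-square minimization.
popt, pcov = curve_fit(model, q, y, p0=[8.0, 3.0], sigma=sigma, absolute_sigma=True)
chi2 = np.sum(((y - model(q, *popt)) / sigma) ** 2)
```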

  1. Thermodynamic-behaviour model for air-cooled screw chillers with a variable set-point condensing temperature

    International Nuclear Information System (INIS)

    Chan, K.T.; Yu, F.W.

    2006-01-01

This paper presents a thermodynamic model to evaluate the coefficient of performance (COP) of an air-cooled screw chiller under various operating conditions. The model accounts for the real process phenomena, including the capacity control of screw compressors and variations in the heat-transfer coefficients of the evaporator and condenser at part load. It also contains an algorithm to determine how the condenser fans are staged in response to a set-point condensing temperature. The model parameters are identified based on the performance data of chiller specifications. The chiller model is validated using a wide range of operating data from an air-cooled screw chiller. The difference between the measured and modelled COPs is within ±10% for 86% of the data points. The chiller's COP can increase by up to 115% when the set-point condensing temperature is adjusted based on any given outdoor temperature. Based on the identified variation in COP, a suitable strategy is proposed for air-cooled screw chillers to operate at maximum efficiency as far as possible while satisfying a building's cooling load.

  2. Protection set-points lines for the reactor core and considerations about power distribution and peak factors

    International Nuclear Information System (INIS)

    Furieri, E.B.

    1981-01-01

In order to assure reactor core integrity during slow operational transients (power excursion above the nominal value and high coolant temperature), the formation of a steam film (DNB, Departure from Nucleate Boiling) on the fuel rods must be avoided. The protection set-point lines present the points where DNBR (the ratio between the critical heat flux q_DNB and the local heat flux q'_local) is equal to 1.30, corrected by peak factors and uncertainties, as a function of ΔT_R and T_R, respectively the coolant temperature rise and the mean coolant temperature in the reactor pressure vessel. The set-point curves were determined using a new version of the COBRA-IIIF (CUPRO) computer code, implemented with new subroutines and a linearized convergence scheme. Practical results for the Angra-1 core were obtained and compared with the results from the manufacturer. (E.G.) [pt

  3. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    Science.gov (United States)

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

The coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²_cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reverse manner, to detect uncommon points, i.e. influential points, in any data set. The term (1 − Q²)/(1 − R²) corresponds to the ratio of the predictive residual sum of squares (PRESS) and the residual sum of squares (RSS). This ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F-test on the (1 − Q²)/(1 − R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns model builders to verify the training set, to perform influence analysis, or even to switch to robust modeling.
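For ordinary least squares, Q² can be computed without refitting, since the leave-one-out (PRESS) residuals follow from the hat matrix. A small numpy sketch of the (1 − Q²)/(1 − R²) diagnostic described in the abstract, on hypothetical simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, 30)

X = np.column_stack([np.ones_like(x), x])        # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
H = X @ np.linalg.inv(X.T @ X) @ X.T             # hat matrix
press_resid = resid / (1.0 - np.diag(H))         # leave-one-out residuals

ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - np.sum(resid ** 2) / ss_tot           # R^2
q2 = 1.0 - np.sum(press_resid ** 2) / ss_tot     # Q^2 (LOO cross-validated)
ratio = (1.0 - q2) / (1.0 - r2)                  # PRESS / RSS
```

Since PRESS ≥ RSS for OLS, the ratio is always at least 1; values well above 1 are the signal the proposed F-test screens for.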

  4. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    International Nuclear Information System (INIS)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin

    2016-01-01

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  5. Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-01-01

Full Text Available By simplifying the tolerance problem and treating faulty voltages on different test points as independent variables, the integer-coded table technique has been proposed to simplify the test point selection process. However, simplifying the tolerance problem may induce a wrong solution, while the independence assumption results in an overly conservative result. To address these problems, the tolerance problem is thoroughly considered in this paper, and the dependency relationship between different test points is taken into account at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated using the ambiguity sets and the faulty voltage distribution determined by component tolerance. Second, the selected optimal test point is used to expand the current graph node, using the dependence relationship between the test point and the graph node. Simulation results indicate that the proposed method finds the optimal set of test points more accurately than other methods; it is therefore a good solution for minimizing the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.

  6. Minimal string theory is logarithmic

    International Nuclear Information System (INIS)

    Ishimoto, Yukitaka; Yamaguchi, Shun-ichi

    2005-01-01

We study the simplest examples of minimal string theory, whose worldsheet description is the unitary (p,q) minimal model coupled to two-dimensional gravity (Liouville field theory). In the Liouville sector, we show that four-point correlation functions of 'tachyons' exhibit logarithmic singularities, and that the theory turns out to be logarithmic. The relation with Zamolodchikov's logarithmic degenerate fields is also discussed. Our result holds for generic values of (p,q)

  7. Non-minimal Wu-Yang monopole

    International Nuclear Information System (INIS)

    Balakin, A.B.; Zayats, A.E.

    2007-01-01

    We discuss new exact spherically symmetric static solutions to non-minimally extended Einstein-Yang-Mills equations. The obtained solution to the Yang-Mills subsystem is interpreted as a non-minimal Wu-Yang monopole solution. We focus on the analysis of two classes of the exact solutions to the gravitational field equations. Solutions of the first class belong to the Reissner-Nordstroem type, i.e., they are characterized by horizons and by the singularity at the point of origin. The solutions of the second class are regular ones. The horizons and singularities of a new type, the non-minimal ones, are indicated

  8. Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    Sakuma, Hidenori; Sannino, Francesco

    2010-01-01

We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mas…

  9. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-03-27

Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.

  10. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained, which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z0 partial widths, are discussed in detail. Finally the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  11. Amorphous topological insulators constructed from random point sets

    Science.gov (United States)

    Mitchell, Noah P.; Nash, Lisa M.; Hexner, Daniel; Turner, Ari M.; Irvine, William T. M.

    2018-04-01

The discovery that the band structure of electronic insulators may be topologically non-trivial has revealed distinct phases of electronic matter with novel properties [1,2]. Recently, mechanical lattices have been found to have similarly rich structure in their phononic excitations [3,4], giving rise to protected unidirectional edge modes [5-7]. In all of these cases, however, as well as in other topological metamaterials [3,8], the underlying structure was finely tuned, be it through periodicity, quasi-periodicity or isostaticity. Here we show that amorphous Chern insulators can be readily constructed from arbitrary underlying structures, including hyperuniform, jammed, quasi-crystalline and uniformly random point sets. While our findings apply to mechanical and electronic systems alike, we focus on networks of interacting gyroscopes as a model system. Local decorations control the topology of the vibrational spectrum, endowing amorphous structures with protected edge modes, with a chirality of choice. Using a real-space generalization of the Chern number, we investigate the topology of our structures numerically, analytically and experimentally. The robustness of our approach enables the topological design and self-assembly of non-crystalline topological metamaterials on the micro and macro scale.

  12. A New Iterative Method for Equilibrium Problems and Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Abdul Latif

    2013-01-01

Full Text Available Introducing a new iterative method, we study the existence of a common element of the set of solutions of equilibrium problems for a family of monotone, Lipschitz-type continuous mappings and the sets of fixed points of two nonexpansive semigroups in a real Hilbert space. We establish strong convergence theorems of the new iterative method for the solution of the variational inequality problem which is the optimality condition for the minimization problem. Our results improve and generalize the corresponding recent results of Anh (2012), Cianciaruso et al. (2010), and many others.

  13. On the reflection point where light reflects to a known destination on quadratic surfaces.

    Science.gov (United States)

    Gonçalves, Nuno

    2010-01-15

We address the problem of determining the reflection point on a specular surface at which a light ray traveling from a source to a target is reflected. The specular surfaces considered are those expressed by a quadratic equation. So far, there is no closed-form explicit equation for the general solution of this determination of the reflection point, and the usual approach is to use Snell's law or the Fermat principle, whose equations lead to multidimensional nonlinear minimizations. We prove in this Letter that one can impose a set of three restrictions on the reflection point that culminates in a very elegant formalism: searching for the reflection point along a unidimensional curve in space. This curve is the intersection of two quadratic surfaces. Some applications of this framework are also discussed.

  14. Determination of the protection set-points lines for the Angra-1 reactor core

    International Nuclear Information System (INIS)

    Furieri, E.B.

    1980-03-01

In this work several thermo-hydraulic calculations were performed to obtain the protection set-point lines for the Angra-1 reactor core, in order to compare them with the values presented by the vendor in the FSAR. These lines are the locus of points where DNBR_min = 1.3 and power = 1.18 × P_nominal, as a function of ΔT_m and T_m, the temperature difference and the average coolant temperature between the hot and cold legs. A computation scheme was developed using COBRA-IIIF as a subroutine of a new main program, adding new subroutines in order to obtain the desired DNBR. The solution is obtained through a convergence procedure using parameters estimated in a sensitivity study. (author) [pt

  15. ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS

    Directory of Open Access Journals (Sweden)

    Dietrich Stoyan

    2011-05-01

Full Text Available This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
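As a point of reference, a naive empirical estimate of D, without the edge corrections that motivate estimators such as Hanisch's, can be computed directly from nearest-neighbour distances. A small numpy sketch on a simulated point pattern:

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(0.0, 1.0, size=(500, 2))   # Poisson-like sample in the unit square

# Nearest-neighbour distance of every point (ignoring edge effects,
# which is exactly what Hanisch-type estimators are designed to handle).
d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
np.fill_diagonal(d2, np.inf)
nn = np.sqrt(d2.min(axis=1))

def D_hat(r):
    """Naive empirical estimate of D(r): fraction of points whose
    nearest neighbour lies within distance r."""
    return float(np.mean(nn <= r))
```

For a homogeneous Poisson process of intensity λ the true curve is D(r) = 1 − exp(−λπr²), which makes simulated patterns like this one a convenient benchmark for comparing estimators.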

  16. Design of a Novel Low Cost Point of Care Tampon (POCkeT) Colposcope for Use in Resource Limited Settings

    Science.gov (United States)

    Lam, Christopher T.; Krieger, Marlee S.; Gallagher, Jennifer E.; Asma, Betsy; Muasher, Lisa C.; Schmitt, John W.; Ramanujam, Nimmi

    2015-01-01

Introduction Current WHO guidelines for cervical cancer screening in low- and middle-income countries involve visual inspection with acetic acid (VIA) of the cervix, followed by treatment with cryotherapy during the same or a subsequent visit if a suspicious lesion is found. Implementation of these guidelines is hampered by a lack of trained health workers, reliable technology, and access to screening facilities. A low-cost, ultra-portable Point of Care Tampon based digital colposcope (POCkeT Colposcope), intended for use at the community level, has the unique form factor of a tampon and can be inserted into the vagina to capture images of the cervix that are on par with those of a state-of-the-art colposcope, at a fraction of the cost. A repository of images will be compiled that can be used to empower front-line workers to become more effective through virtual dynamic training. By task shifting to the community setting, this technology could potentially provide significantly greater cervical screening access where the most vulnerable women live. The POCkeT Colposcope's concentric LED ring provides comparable white and green field illumination at a fraction of the electrical power required by commercial colposcopes. The POCkeT Colposcope was evaluated with standard optical imaging targets against a state-of-the-art digital colposcope and other VIAM technologies. Results The POCkeT Colposcope has comparable resolving power, color reproduction accuracy, minimal lens distortion, and illumination when compared to commercially available colposcopes. In vitro and pilot in vivo imaging results are promising, with the POCkeT Colposcope capturing images of comparable quality to commercial systems. Conclusion The POCkeT Colposcope is capable of capturing images suitable for cervical lesion analysis. Our portable low-cost system could potentially increase access to cervical cancer screening in limited-resource settings through task shifting to community

  17. Common Fixed Points of Mappings and Set-Valued Mappings in Symmetric Spaces with Application to Probabilistic Spaces

    OpenAIRE

    M. Aamri; A. Bassou; S. Bennani; D. El Moutawakil

    2007-01-01

    The main purpose of this paper is to give some common fixed point theorems of mappings and set-valued mappings of a symmetric space with some applications to probabilistic spaces. In order to get these results, we define the concept of E-weak compatibility between set-valued and single-valued mappings of a symmetric space.

  18. Multiple blocking sets in PG(n,q), n>=3

    DEFF Research Database (Denmark)

    Barat, Janos

    2004-01-01

This article discusses minimal s-fold blocking sets B in PG(n, q), q = p^h, p prime, q > 661, n ≥ 3, of size |B| > sq + c_p q^(2/3) − (s − 1)(s − 2)/2, where s > min(c_p q^(1/6), q^(1/4)/2). It is shown that these s-fold blocking sets contain the disjoint union of a collection of s lines and/or Baer subplanes....

  19. Smartphone Use by Nurses in Acute Care Settings.

    Science.gov (United States)

    Flynn, Greir Ander Huck; Polivka, Barbara; Behr, Jodi Herron

    2018-03-01

    The use of smartphones in acute care settings remains controversial due to security concerns and personal use. The purposes of this study were to determine (1) the current rates of personal smartphone use by nurses in acute care settings, (2) nurses' preferences regarding the use of smartphone functionality at work, and (3) nurse perceptions of the benefits and drawbacks of smartphone use at work. An online survey of nurses from six acute care facilities within one healthcare system assessed the use of personal smartphones in acute care settings and perceptions of the benefits and drawbacks of smartphone use at work. Participants (N = 735) were primarily point-of-care nurses older than 31 years. Most participants (98%) used a smartphone in the acute care setting. Respondents perceived the most common useful and beneficial smartphone functions in acute care settings as allowing them to access information on medications, procedures, and diseases. Participants older than 50 years were less likely to use a smartphone in acute care settings and to agree with the benefits of smartphones. There is a critical need for recognition that smartphones are used by point-of-care nurses for a variety of functions and that realistic policies for smartphone use are needed to enhance patient care and minimize distractions.

  20. Minimal Dark Matter in the sky

    International Nuclear Information System (INIS)

    Panci, P.

    2016-01-01

    We discuss some theoretical and phenomenological aspects of the Minimal Dark Matter (MDM) model proposed in 2006, which is a theoretical framework highly appreciated for its minimality and yet its predictivity. We first critically review the theoretical requirements of MDM pointing out generalizations of this framework. Then we review the phenomenology of the originally proposed fermionic hyperchargeless electroweak quintuplet showing its main γ-ray tests.

  1. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    Science.gov (United States)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  2. DETECTION OF SLOPE MOVEMENT BY COMPARING POINT CLOUDS CREATED BY SFM SOFTWARE

    Directory of Open Access Journals (Sweden)

    K. Oda

    2016-06-01

Full Text Available This paper proposes a movement detection method between point clouds created by SfM software, without setting any onsite georeferenced points. SfM software such as Smart3DCapture, PhotoScan, and Pix4D is convenient for non-professional operators of photogrammetry, because these systems simply require the specification of a sequence of photos and output point clouds with a colour index corresponding to the colour of the original image pixel where the point is projected. SfM software can execute aerial triangulation and create dense point clouds fully automatically. This is useful when monitoring the motion of unstable slopes, or loose rocks on slopes along roads or railroads. Most existing methods, however, use mesh-based DSMs for comparing point clouds before/after movement, which cannot be applied in cases where part of a slope forms overhangs. And in some cases the movement is smaller than the precision of the ground control points, so registering two point clouds with GCPs is not appropriate. The change detection method in this paper adopts the CCICP (Classification and Combined ICP) algorithm for registering point clouds before/after movement. The CCICP algorithm is a type of ICP (Iterative Closest Point) which minimizes point-to-plane and point-to-point distances simultaneously, and also rejects incorrect correspondences based on point classification by PCA (Principal Component Analysis). A precision test shows that the CCICP method can register two point clouds to the order of 1 pixel in the original images. Ground control points set on site are useful for the initial alignment of the two point clouds. If there are no GCPs on the site, the initial alignment is achieved by measuring feature points as ground control points in the point cloud before movement, and creating the point cloud after movement with these ground control points.
When the motion is a rigid transformation, for example when a loose rock is moving on a slope, motion including rotation can be analysed by executing CCICP for a
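The core least-squares step inside an ICP iteration, aligning matched point pairs with a rigid transform, has a closed-form SVD (Kabsch) solution. The sketch below is a generic numpy illustration of that point-to-point step on synthetic data, not the authors' CCICP implementation:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping row-stacked
    points P onto their correspondences Q (one ICP point-to-point step)."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

rng = np.random.default_rng(2)
P = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true                      # rigidly moved copy of P
R, t = best_rigid_transform(P, Q)
```

A full ICP loop alternates this solve with re-matching closest points; CCICP additionally classifies points and rejects bad correspondences before each solve.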

  3. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    Science.gov (United States)

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.

  4. Emerging technologies in point-of-care molecular diagnostics for resource-limited settings.

    Science.gov (United States)

    Peeling, Rosanna W; McNerney, Ruth

    2014-06-01

    Emerging molecular technologies to diagnose infectious diseases at the point at which care is delivered have the potential to save many lives in developing countries where access to laboratories is poor. Molecular tests are needed to improve the specificity of syndromic management, monitor progress towards disease elimination and screen for asymptomatic infections with the goal of interrupting disease transmission and preventing long-term sequelae. In simplifying laboratory-based molecular assays for use at point-of-care, there are inevitable compromises between cost, ease of use and test performance. Despite significant technological advances, many challenges remain for the development of molecular diagnostics for resource-limited settings. There needs to be more advocacy for these technologies to be applied to infectious diseases, increased efforts to lower the barriers to market entry through streamlined and harmonized regulatory approaches, faster policy development for adoption of new technologies and novel financing mechanisms to enable countries to scale up implementation.

  5. Minimally invasive approaches for the treatment of inflammatory bowel disease

    Institute of Scientific and Technical Information of China (English)

    Marco Zoccali; Alessandro Fichera

    2012-01-01

Despite significant improvements in the medical management of inflammatory bowel disease, many of these patients still require surgery at some point in the course of their disease. Their young age and poor general condition, worsened by aggressive medical treatments, make minimally invasive approaches particularly enticing for this patient population. However, the typical inflammatory changes that characterize these diseases have hindered wide diffusion of laparoscopy in this setting, currently mostly pursued in high-volume referral centers, despite accumulating evidence in the literature supporting the benefits of minimally invasive surgery. The largest body of evidence currently available, for terminal ileal Crohn's disease, shows improved short-term outcomes after laparoscopic surgery, with prolonged operative times. For Crohn's colitis, high-quality evidence supporting laparoscopic surgery is lacking. Encouraging preliminary results have been obtained with the adoption of laparoscopic restorative total proctocolectomy for the treatment of ulcerative colitis. A consensus about patient selection and the need for staging has not yet been reached. Despite the lack of conclusive evidence, a wave of enthusiasm is pushing towards less invasive strategies to further minimize surgical trauma, with single-incision laparoscopic surgery being the most realistic future development.

  6. General least-squares fitting procedures to minimize the volume of a hyperellipsoid

    International Nuclear Information System (INIS)

    Wadlinger, E.A.

    1979-01-01

    Several methods for determining the shape parameters, which in two dimensions are the Courant-Snyder parameters, and the volume of an ellipse or hyperellipse that represent a set of phase-space points in a two or more dimensional hyperspace are presented. The ellipse parameters are useful for matching a beam to an accelerating or transport system and in studies of emittance growth. The fitting procedure minimizes the total volume of a hyperellipse by adjusting the ellipse shape parameters. The total volume is the sum of the individual particle volumes defined by the hyperellipse that passes through the phase-space point of a particle. A two-dimensional space is considered first; the results are then generalized to higher dimensions. Computer programs using these techniques were written. 1 figure
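For the two-dimensional case, the rms ellipse parameters (the Courant-Snyder parameters) follow directly from the second moments of the phase-space point set. The numpy sketch below is a moments-based illustration of those quantities, not the paper's volume-minimizing fitting procedure:

```python
import numpy as np

def rms_ellipse(x, xp):
    """RMS Courant-Snyder parameters of a 2-D phase-space point set
    (x = position, xp = divergence)."""
    x = x - x.mean()
    xp = xp - xp.mean()
    s11, s22, s12 = np.mean(x * x), np.mean(xp * xp), np.mean(x * xp)
    eps = np.sqrt(s11 * s22 - s12 ** 2)   # rms emittance (ellipse area / pi)
    beta = s11 / eps
    gamma = s22 / eps
    alpha = -s12 / eps
    return eps, alpha, beta, gamma

rng = np.random.default_rng(3)
x = rng.normal(0.0, 2.0, 10_000)
xp = 0.5 * x + rng.normal(0.0, 1.0, 10_000)   # correlated (tilted) beam
eps, alpha, beta, gamma = rms_ellipse(x, xp)
```

By construction these parameters satisfy the identity βγ − α² = 1, and a positively correlated beam (as above) gives a negative α, the usual sign convention for a diverging tilt.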

  7. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    Science.gov (United States)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving its speed is proposed in this paper. First, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained from the transform. Second, the LiveWire shortest path is computed with a directional search over the control point set, exploiting the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool when optimizing shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform, which decomposes and reconstructs images quickly and is consistent with the texture features of the image, and of the optimal path search based on the control point set direction search, which reduces the time complexity of the original algorithm. The algorithm thus speeds up interactive boundary extraction while reflecting the boundary information of the image more comprehensively, improving both execution efficiency and robustness.
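One level of the Haar decomposition used in the first step can be sketched as follows; this is an illustrative averaging/differencing implementation (assuming even image dimensions), not the paper's code:

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2-D Haar transform: returns the low-resolution
    approximation (LL) plus horizontal/vertical/diagonal detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row pairs: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row pairs: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar2d_level(img)   # ll is the coarse 4x4 image the
                                     # boundary search would run on first
```

The LL band is the low-resolution image on which the coarse boundary is extracted; the detail bands allow the result to be projected back to full resolution.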

  8. Effect of Set-point Variation on Thermal Comfort and Energy Use in a Plus-energy Dwelling

    DEFF Research Database (Denmark)

    Toftum, Jørn; Kazanci, Ongun Berk; Olesen, Bjarne W.

    2016-01-01

    When designing buildings and space conditioning systems, the occupant thermal comfort, health, and productivity are the main criteria to satisfy. However, this should be achieved with the most energy-efficient space conditioning systems (heating, cooling, and ventilation). Control strategy, set......-points, and control dead-bands have a direct effect on the thermal environment in and the energy use of a building. The thermal environment in and the energy use of a building are associated with the thermal mass of the building and the control strategy, including set-points and control dead-bands. With thermally...... active building systems (TABS), temperatures are allowed to drift within the comfort zone, while in spaces with air-conditioning, temperatures in a narrower interval typically are aimed at. This behavior of radiant systems provides certain advantages regarding energy use, since the temperatures...

  9. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.

  10. Specialized minimal PDFs for optimized LHC calculations

    CERN Document Server

    Carrazza, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-04-15

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct these SM-PDFs in such a way that sets corresponding to different input processes can be combined without losing information, specifically on their correlations, and that they are robust upon smooth variations of the kinematic cuts. The proposed strategy never discards information, so that the SM-PDF sets can be enlarged by the addition of new processes, until the prior PDF set is eventually recovered for a large enough set of processes. We illustrate the method by producing SM-PDFs tailored to Higgs, top quark pair, and electroweak gauge boson physics, and determine that, when the PDF4LHC15 combined set is used as the prior, around 11, 4 and 11 Hessian eigenvectors respectively are enough to fully describe the corresp...

  11. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
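
The idea can be sketched for a least-squares objective ||Ax − b||², where each row of A corresponds to one ray: the error and its gradient are evaluated on a random subset of rows, and the minimum along the conjugate direction is taken with respect to that subset. This is an illustrative reconstruction from the abstract, not the described implementation; the function name and defaults are assumptions:

```python
import numpy as np

def cg_subset_error(A, b, n_iter=100, subset_frac=0.7, seed=0):
    """Conjugate-gradient sketch for min ||A x - b||^2 in which the error and
    its gradient are evaluated on a random subset of the rows ("rays") at
    every iteration, and the line minimum along the conjugate direction is
    computed with respect to that subset."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = max(1, int(subset_frac * m))
    x = np.zeros(n)
    d = g_prev = None
    for it in range(n_iter):
        rows = rng.choice(m, size=k, replace=False)   # subset of rays
        As, bs = A[rows], b[rows]
        g = As.T @ (As @ x - bs)                      # approximate gradient
        if d is None or it % n == 0:                  # periodic restart
            d = -g
        else:
            beta = (g @ g) / (g_prev @ g_prev)        # Fletcher-Reeves update
            d = -g + beta * d
        Ad = As @ d
        if Ad @ Ad == 0:
            break
        x = x + (-(g @ d) / (Ad @ Ad)) * d            # exact line min on subset
        g_prev = g
    return x
```

Because only a subset of rays enters each error and gradient evaluation, the per-iteration cost scales with the subset size rather than with the full data set.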

  12. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    Science.gov (United States)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.

  13. Minimization over randomly selected lines

    Directory of Open Access Journals (Sweden)

    Ismet Sahin

    2013-07-01

    Full Text Available This paper presents a population-based evolutionary optimization method for minimizing a given cost function. The mutation operator of this method selects randomly oriented lines in the cost function domain, constructs quadratic functions interpolating the cost function at three different points over each line, and uses the extrema of the quadratics as mutated points. The crossover operator modifies each mutated point based on components of two points in the population, instead of one point as is usually done in other evolutionary algorithms. The stopping criterion of this method depends on the number of almost degenerate quadratics. We demonstrate that the proposed method with these mutation and crossover operations achieves faster and more robust convergence than the well-known Differential Evolution and Particle Swarm algorithms.
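
The mutation operator described above can be sketched as follows (an illustrative reconstruction from the abstract, not the authors' implementation; names and defaults are assumptions):

```python
import numpy as np

def line_quadratic_mutation(f, x, rng, radius=1.0):
    """Pick a random direction, interpolate f by a quadratic through three
    points on that line, and return the quadratic's extremum as the mutated
    point (falling back to x when the quadratic is degenerate or concave)."""
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)                    # random unit direction
    ts = np.array([-radius, 0.0, radius])
    ys = np.array([f(x + t * u) for t in ts])
    # Fit y = a t^2 + b t + c through the three samples (exact for 3 points).
    a, b, c = np.polyfit(ts, ys, 2)
    if a <= 1e-12:                            # almost degenerate quadratic
        return x
    t_star = -b / (2 * a)                     # minimizer of the quadratic
    return x + t_star * u
```

For a cost function that is itself quadratic along the line (e.g. the sphere function), the interpolant is exact and the mutated point is the exact line minimizer.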

  14. Search for outlying data points in multivariate solar activity data sets

    International Nuclear Information System (INIS)

    Bartkowiak, A.; Jakimiec, M.

    1989-01-01

    The aim of this paper is to investigate outlying data points in solar activity data sets. Two statistical methods for identifying multivariate outliers are presented: the chi²-plot method, based on the analysis of Mahalanobis distances, and a method based on principal component analysis, i.e. on scatter diagrams constructed from the first two or last two eigenvectors. We demonstrate the usefulness of these methods by applying them to some solar activity data. The methods reveal quite precisely the data vectors containing errors, as well as untypical vectors, i.e. vectors with unusually large values or with values whose relations are untypical compared with the common relations between the corresponding variables. 12 refs., 7 figs., 8 tabs. (author)
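
The chi²-plot method rests on squared Mahalanobis distances, which for p-variate normal data approximately follow a chi² distribution with p degrees of freedom. A minimal sketch (not the authors' code; the function name is an assumption):

```python
import numpy as np

def mahalanobis_d2(X):
    """Squared Mahalanobis distances of the rows of X from the sample mean;
    this is the quantity inspected in the chi^2-plot method."""
    mu = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

# Rows whose distance exceeds a chi^2 quantile with p = X.shape[1] degrees of
# freedom (about 7.38 for p = 2 at the 97.5% level) are flagged as candidate
# multivariate outliers.
```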

  15. Learn with SAT to Minimize Büchi Automata

    Directory of Open Access Journals (Sweden)

    Stephan Barth

    2012-10-01

    Full Text Available We describe a minimization procedure for nondeterministic Büchi automata (NBA). For an automaton A, another automaton A_min with the minimal number of states is learned with the help of a SAT-solver. This is done by successively computing automata A' that approximate A in the sense that they accept a given finite set of positive examples and reject a given finite set of negative examples. In the course of the procedure these example sets are successively increased. Thus, our method can be seen as an instance of a generic learning algorithm based on a "minimally adequate teacher" in the sense of Angluin. We use a SAT solver to find an NBA for given sets of positive and negative examples. We use complementation via construction of deterministic parity automata to check candidates computed in this manner for equivalence with A. Failure of equivalence yields new positive or negative examples. Our method proved successful on complete samplings of small automata and on quite a few larger automata. We successfully ran the minimization on over ten thousand automata, mostly with up to ten states, including the complements of all possible automata with two states and alphabet size three, and we discuss results and runtimes; single examples had over 100 states.

  16. Protection coordination: Determination of break point set

    NARCIS (Netherlands)

    Madani, S.M.; Rijanto, H.

    1998-01-01

    Modern power system networks are often multiloop structured. The co-ordinated setting of overcurrent and distance protective relays in such networks is tedious and time consuming. The complicated part of this problem is the determination of a proper minimum set of relays, the so-called minimum

  17. Chemical bonding analysis for solid-state systems using intrinsic oriented quasiatomic minimal-basis-set orbitals

    International Nuclear Information System (INIS)

    Yao, Y.X.; Wang, C.Z.; Ho, K.M.

    2010-01-01

    A chemical bonding scheme is presented for the analysis of solid-state systems. The scheme is based on the intrinsic oriented quasiatomic minimal-basis-set orbitals (IO-QUAMBOs) previously developed by Ivanic and Ruedenberg for molecular systems. In the solid-state scheme, IO-QUAMBOs are generated by a unitary transformation of the quasiatomic orbitals located at each site of the system, with the criterion of maximizing the sum of the fourth power of the interatomic orbital bond order. Possible bonding and antibonding characters are indicated by the single particle matrix elements, and can be further examined by the projected density of states. We demonstrate the method by applications to graphene and the (6,0) zigzag carbon nanotube. The oriented-orbital scheme automatically describes the system in terms of sp² hybridization. The effect of curvature on the electronic structure of the zigzag carbon nanotube is also manifested in the deformation of the intrinsic oriented orbitals as well as a breaking of symmetry leading to nonzero single particle density matrix elements. In an additional study, the analysis is performed on the Al₃V compound. The main covalent bonding characters are identified in a straightforward way without resorting to symmetry analysis. Our method provides a general way for chemical bonding analysis of ab initio electronic structure calculations with any type of basis set.

  18. Process setting models for the minimization of costs of defectives

    African Journals Online (AJOL)

    Dr Obe

    determine the mean setting so as to minimise the total loss through under-limit complaints and loss of sales and goodwill as well as over-limit losses through excess materials and rework costs. Models are developed for the two types of setting of the mean so that the minimum costs of losses are achieved. Also, a model is ...

  19. Specialized minimal PDFs for optimized LHC calculations

    International Nuclear Information System (INIS)

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-01-01

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct these SM-PDFs in such a way that sets corresponding to different input processes can be combined without losing information, specifically as regards their correlations, and that they are robust upon smooth variations of the kinematic cuts. The proposed strategy never discards information, so that the SM-PDF sets can be enlarged by the addition of new processes, until the prior PDF set is eventually recovered for a large enough set of processes. We illustrate the method by producing SM-PDFs tailored to Higgs, top-quark pair, and electroweak gauge boson physics, and we determine that, when the PDF4LHC15 combined set is used as the prior, around 11, 4, and 11 Hessian eigenvectors, respectively, are enough to fully describe the corresponding processes. (orig.)

  20. Observer-based design of set-point tracking adaptive controllers for nonlinear chaotic systems

    International Nuclear Information System (INIS)

    Khaki-Sedigh, A.; Yazdanpanah-Goharrizi, A.

    2006-01-01

    A gradient based approach for the design of set-point tracking adaptive controllers for nonlinear chaotic systems is presented. In this approach, Lyapunov exponents are used to select the controller gain. In the case of unknown or time varying chaotic plants, the Lyapunov exponents may vary during the plant operation. In this paper, an effective adaptive strategy is used for online identification of Lyapunov exponents and adaptive control of nonlinear chaotic plants. Also, a nonlinear observer for estimation of the states is proposed. Simulation results are provided to show the effectiveness of the proposed methodology

  1. Observer-based design of set-point tracking adaptive controllers for nonlinear chaotic systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaki-Sedigh, A. [Department of Electrical Engineering, K.N. Toosi University of Technology, Sayyed Khandan Bridge, Shariati Street, Tehran 16314 (Iran, Islamic Republic of)]. E-mail: sedigh@kntu.ac.ir; Yazdanpanah-Goharrizi, A. [Department of Electrical Engineering, K.N. Toosi University of Technology, Sayyed Khandan Bridge, Shariati Street, Tehran 16314 (Iran, Islamic Republic of)]. E-mail: yazdanpanah@ee.kntu.ac.ir

    2006-09-15

    A gradient based approach for the design of set-point tracking adaptive controllers for nonlinear chaotic systems is presented. In this approach, Lyapunov exponents are used to select the controller gain. In the case of unknown or time varying chaotic plants, the Lyapunov exponents may vary during the plant operation. In this paper, an effective adaptive strategy is used for online identification of Lyapunov exponents and adaptive control of nonlinear chaotic plants. Also, a nonlinear observer for estimation of the states is proposed. Simulation results are provided to show the effectiveness of the proposed methodology.

  2. Hybrid Iterative Scheme for Triple Hierarchical Variational Inequalities with Mixed Equilibrium, Variational Inclusion, and Minimization Constraints

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    Full Text Available We introduce and analyze a hybrid iterative algorithm by combining Korpelevich's extragradient method, the hybrid steepest-descent method, and the averaged mapping approach to the gradient-projection algorithm. It is proven that, under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of finitely many nonexpansive mappings, the solution set of a generalized mixed equilibrium problem (GMEP, the solution set of finitely many variational inclusions, and the solution set of a convex minimization problem (CMP, which is also a unique solution of a triple hierarchical variational inequality (THVI in a real Hilbert space. In addition, we also consider the application of the proposed algorithm to solving a hierarchical variational inequality problem with constraints of the GMEP, the CMP, and finitely many variational inclusions.

  3. Setting limits on supersymmetry using simplified models

    CERN Document Server

    Gutschow, C.

    2012-01-01

    Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical implications. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be re-cast in this manner into almost any theoretical framework, includ...

  4. Review of Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    S. Fukano, Hidenori; Sannino, Francesco

    2010-01-01

    We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mass ...

  5. Generation of the covariance matrix for a set of nuclear data produced by collapsing a larger parent set through the weighted averaging of equivalent data points

    International Nuclear Information System (INIS)

    Smith, D.L.

    1987-01-01

    A method is described for generating the covariance matrix of a set of experimental nuclear data which has been collapsed in size by the averaging of equivalent data points belonging to a larger parent data set. It is assumed that the data values and covariance matrix for the parent set are provided. The collapsed set is obtained by a proper weighted-averaging procedure based on the method of least squares. It is then shown by means of the law of error propagation that the elements of the covariance matrix for the collapsed set are linear combinations of elements from the parent set covariance matrix. The coefficients appearing in these combinations are binary products of the same coefficients which appear as weighting factors in the data collapsing procedure. As an example, the procedure is applied to a collection of recently-measured integral neutron-fission cross-section ratios. (orig.)
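
The collapsing procedure can be sketched numerically: within each group of equivalent points, the least-squares weights are w = V_g⁻¹·1 / (1ᵀ V_g⁻¹ 1), and error propagation gives the collapsed covariance as W V Wᵀ, whose elements are linear combinations of parent covariance elements with coefficients that are binary products of the weights, as the abstract states. A minimal sketch (function name and interface are assumptions):

```python
import numpy as np

def collapse(y, V, groups):
    """Collapse a parent data set y with covariance V by weighted-averaging
    the equivalent points listed in each group. 'groups' is a list of index
    arrays; within each group the least-squares weights are
    w = V_g^{-1} 1 / (1^T V_g^{-1} 1)."""
    W = np.zeros((len(groups), len(y)))          # collapsing matrix
    for a, idx in enumerate(groups):
        Vg = V[np.ix_(idx, idx)]
        w = np.linalg.solve(Vg, np.ones(len(idx)))
        W[a, idx] = w / w.sum()
    y_c = W @ y                                   # collapsed values
    V_c = W @ V @ W.T                             # propagated covariance
    return y_c, V_c
```

Averaging two uncorrelated unit-variance points, for instance, halves the variance of the collapsed point while leaving unrelated points untouched.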

  6. Minimization of rad waste production in NPP Dukovany

    International Nuclear Information System (INIS)

    Kulovany, J.

    2001-01-01

    A whole range of measures has been taken at the power plant in connection with the minimization of radioactive waste, and these will lead to the goals that have been set. Procedures that prevent possible endangering of the operation take precedence when the minimization measures are introduced. In addition, economically undemanding procedures that bring about minimization in an effective way are implemented. In accordance with the EMS principles, it can be expected that the minimizing measures will also be implemented in areas where their greatest contribution will be to the environment.

  7. On the isoperimetric rigidity of extrinsic minimal balls

    DEFF Research Database (Denmark)

    Markvorsen, Steen; Palmer, V.

    2003-01-01

    We consider an m-dimensional minimal submanifold P and a metric R-sphere in the Euclidean space R^n. If the sphere has its center p on P, then it will cut out a well defined connected component of P which contains this center point. We call this connected component an extrinsic minimal R-ball of P. The quotient of the volume of the extrinsic ball and the volume of its boundary is not larger than the corresponding quotient obtained in the space form standard situation, where the minimal submanifold is the totally geodesic linear subspace R^m. Here we show that if the minimal submanifold has dimension larger than 3, if P is not too curved along the boundary of an extrinsic minimal R-ball, and if the inequality alluded to above is an equality for the extrinsic minimal ball, then the minimal submanifold is totally geodesic.

  8. Distributed Submodular Minimization And Motion Coordination Over Discrete State Space

    KAUST Repository

    Jaleel, Hassan

    2017-09-21

    Submodular set-functions are extensively used in large-scale combinatorial optimization problems arising in complex networks and machine learning. While there has been significant interest in distributed formulations of convex optimization, distributed minimization of submodular functions has not received significant attention. Thus, our main contribution is a framework for minimizing submodular functions in a distributed manner. The proposed framework is based on the ideas of the Lovasz extension of submodular functions and distributed optimization of convex functions. The framework exploits a fundamental property of submodularity: the Lovasz extension of a submodular function is a convex function that can be computed efficiently, and a minimizer of a submodular function can be obtained by computing the minimizer of its Lovasz extension. In the proposed framework, we employ a consensus-based distributed optimization algorithm to minimize set-valued submodular functions as well as general submodular functions defined over set products. We also identify distributed motion coordination in multiagent systems as a new application domain for submodular function minimization. To demonstrate the key ideas of the proposed framework, we select a complex setup of the capture-the-flag game, which offers a variety of challenges relevant to multiagent systems. We formulate the problem as a submodular minimization problem and verify through extensive simulations that the proposed framework results in feasible policies for the agents.
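
The pivotal property, that the Lovasz extension is convex and cheap to evaluate, can be illustrated with a short sketch (an illustration of the standard construction, not the paper's code):

```python
def lovasz_extension(F, x):
    """Evaluate the Lovasz extension of a set function F (on subsets of
    {0, ..., n-1}) at a point x in [0, 1]^n: sort the coordinates in
    decreasing order and sum F-increments along the resulting chain of sets."""
    order = sorted(range(len(x)), key=lambda i: -x[i])
    S = set()
    val = F(frozenset(S))                  # F(empty set), usually 0
    for i in order:
        val += x[i] * (F(frozenset(S | {i})) - F(frozenset(S)))
        S.add(i)
    return val
```

On indicator vectors the extension agrees with F itself, and for submodular F it is convex, so minimizing F over subsets reduces to minimizing a convex function over the unit cube.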

  9. A simplified density matrix minimization for linear scaling self-consistent field theory

    International Nuclear Information System (INIS)

    Challacombe, M.

    1999-01-01

    A simplified version of the Li, Nunes and Vanderbilt [Phys. Rev. B 47, 10891 (1993)] and Daw [Phys. Rev. B 47, 10895 (1993)] density matrix minimization is introduced that requires four fewer matrix multiplies per minimization step relative to previous formulations. The simplified method also exhibits superior convergence properties, such that the bulk of the work may be shifted to the quadratically convergent McWeeny purification, which brings the density matrix to idempotency. Both orthogonal and nonorthogonal versions are derived. The AINV algorithm of Benzi, Meyer, and Tuma [SIAM J. Sci. Comp. 17, 1135 (1996)] is introduced to linear scaling electronic structure theory, and found to be essential in transformations between orthogonal and nonorthogonal representations. These methods have been developed with an atom-blocked sparse matrix algebra that achieves sustained floating-point rates as high as 50% of the theoretical peak, and implemented in the MondoSCF suite of linear scaling SCF programs. For the first time, linear scaling Hartree-Fock theory is demonstrated with three-dimensional systems, including water clusters and estane polymers. The nonorthogonal minimization is shown to be uncompetitive with minimization in an orthonormal representation. An early onset of linear scaling is found for both minimal and double zeta basis sets, and crossovers with a highly optimized eigensolver are achieved. Calculations with up to 6000 basis functions are reported. The scaling of errors with system size is investigated for various levels of approximation. copyright 1999 American Institute of Physics
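
The McWeeny purification mentioned above iterates P ← 3P² − 2P³, which drives eigenvalues near 0 or 1 to exactly 0 or 1 quadratically. A minimal dense-matrix sketch (not the MondoSCF implementation, which uses atom-blocked sparse algebra):

```python
import numpy as np

def mcweeny_purify(P, tol=1e-10, max_iter=100):
    """Iterate P <- 3P^2 - 2P^3 until the density matrix is idempotent,
    i.e. until ||P^2 - P|| falls below tol."""
    for _ in range(max_iter):
        P2 = P @ P
        if np.linalg.norm(P2 - P) < tol:
            break
        P = 3 * P2 - 2 * (P2 @ P)
    return P
```

Each eigenvalue x of P is mapped to 3x² − 2x³, whose fixed points 0 and 1 attract eigenvalues below and above 1/2 respectively, so a nearly idempotent matrix converges in a handful of iterations.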

  10. Improving the performance of minimizers and winnowing schemes.

    Science.gov (United States)

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-07-15

    The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of their worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles in the negative a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git . gmarcais@cs.cmu.edu or carlk@cs.cmu.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
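
For reference, the basic (w, k)-minimizers selection can be sketched as follows; this is a simplified illustration with a pluggable k-mer ordering, not the authors' software:

```python
def minimizers(seq, k, w, order=None):
    """(w, k)-minimizer selection: from every window of w consecutive k-mers,
    keep the k-mer that is smallest under the given ordering. The default is
    the lexicographic order whose shortcomings are discussed above; a
    randomized ordering generally behaves better."""
    if order is None:
        order = lambda s: s                       # lexicographic by default
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    selected = set()
    for start in range(len(kmers) - w + 1):
        window = range(start, start + w)
        # Break ties by leftmost position, a common convention.
        best = min(window, key=lambda i: (order(kmers[i]), i))
        selected.add((best, kmers[best]))
    return sorted(selected)
```

By construction every window of w consecutive k-mers contains at least one selected position, which is what makes the scheme usable for sequence binning and sampling.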

  11. Flow area optimization in point to area or area to point flows

    International Nuclear Information System (INIS)

    Ghodoossi, Lotfollah; Egrican, Niluefer

    2003-01-01

    This paper deals with the constructal theory of generation of shape and structure in flow systems connecting one point to a finite-size area. The flow direction may be either from the point to the area or from the area to the point; the formulation of the problem remains the same if the flow direction is reversed. Two models are used in the optimization of the point-to-area or area-to-point flow problem: cost minimization and revenue maximization. The cost minimization model enables one to predict the shape of the optimized flow areas, but not their geometric sizes. For example, if the flow area is a rectangle of fixed size, optimizing the flow problem with the cost minimization model will only predict the height/length ratio of the rectangle, not the height and length themselves. By using the revenue maximization model, all optimized geometric aspects of the flow areas of interest are derived as well. The aim of this paper is to optimize the point-to-area or area-to-point flow problems for various elemental flow area shapes and various structures of the flow system (various combinations of elemental flow areas) by using the revenue maximization model. The elemental flow area shapes used in this paper are either rectangular or triangular. The forms of the flow area structure, made up of assemblies of optimized elemental flow areas to obtain bigger flow areas, are rectangle-in-rectangle, rectangle-in-triangle, triangle-in-triangle and triangle-in-rectangle. The global maximum revenue, the revenue collected per unit flow area, and the shape and sizes of each flow area structure have been derived under optimized conditions. The results for each flow area structure have been compared with those of the other structures to determine the structure that provides better performance. The conclusion is that the rectangle-in-triangle flow area structure

  12. MSSM (Minimal Supersymmetric Standard Model) Dark Matter Without Prejudice

    International Nuclear Information System (INIS)

    Gainer, James S.

    2009-01-01

    Recently we examined a large number of points in a 19-dimensional parameter subspace of the CP-conserving MSSM with Minimal Flavor Violation. We determined whether each of these points satisfied existing theoretical, experimental, and observational constraints. Here we discuss the properties of the parameter space points allowed by existing data that are relevant for dark matter searches.

  13. Harm minimization among teenage drinkers

    DEFF Research Database (Denmark)

    Jørgensen, Morten Hulvej; Curtis, Tine; Christensen, Pia Haudrup

    2007-01-01

    AIM: To examine strategies of harm minimization employed by teenage drinkers. DESIGN, SETTING AND PARTICIPANTS: Two periods of ethnographic fieldwork were conducted in a rural Danish community of approximately 2000 inhabitants. The fieldwork included 50 days of participant observation among 13 ... In regulating the social context of drinking they relied on their personal experiences more than on formalized knowledge about alcohol and harm, which they had learned from prevention campaigns and educational programmes. CONCLUSIONS: In this study we found that teenagers may help each other to minimize alcohol ...

  14. Freeze-dried plasma at the point of injury: from concept to doctrine.

    Science.gov (United States)

    Glassberg, Elon; Nadler, Roy; Gendler, Sami; Abramovich, Amir; Spinella, Philip C; Gerhardt, Robert T; Holcomb, John B; Kreiss, Yitshak

    2013-12-01

    While early plasma transfusion for the treatment of patients with ongoing major hemorrhage is widely accepted as part of the standard of care in the hospital setting, logistic constraints have limited its use in the out-of-hospital setting. Freeze-dried plasma (FDP), which can be stored at ambient temperatures, enables early treatment in the out-of-hospital setting. Point-of-injury plasma transfusion entails several significant advantages over currently used resuscitation fluids, including the avoidance of dilutional coagulopathy, by minimizing the need for crystalloid infusion, beneficial effects on endothelial function, physiological pH level, and better maintenance of intravascular volume compared with crystalloid-based solutions. The Israel Defense Forces Medical Corps policy is that plasma is the resuscitation fluid of choice for selected, severely wounded patients and has thus included FDP as part of its armamentarium for use at the point of injury by advanced life savers, across the entire military. We describe the clinical rationale behind the use of FDP at the point-of-injury, the drafting of the administration protocol now being used by Israel Defense Forces advanced life support providers, the process of procurement and distribution, and preliminary data describing the first casualties treated with FDP at the point of injury. It is our hope that others will be able to learn from our experience, thus improving trauma casualty care around the world.

  15. Set point calculations for RAPID project

    International Nuclear Information System (INIS)

    HICKMAN, G.L.

    1999-01-01

    This change modifies the accuracies of the water skid temperature indicators and controllers TIC-410, TI-412, TI-413, TIC-413, TIC-414, and TIC-415. It acknowledges the ability to calibrate PQIT-367 and modifies the accuracy of that instrument loop. It adjusts the allowable dilution water temperature from 110-130F to 102-130F based on PCP Rev. 2 and adjusts alarm and other points to reflect that change. It removes revision numbers for all references. Numerous additional changes (fixing typos, more detailed explanations, etc.) are made throughout.

  16. Reprogramming the body weight set point by a reciprocal interaction of hypothalamic leptin sensitivity and Pomc gene expression reverts extreme obesity

    Directory of Open Access Journals (Sweden)

    Kavaljit H. Chhabra

    2016-10-01

    Conclusions: Pomc reactivation in previously obese, calorie-restricted ArcPomc−/− mice normalized energy homeostasis, suggesting that their body weight set point was restored to control levels. In contrast, massively obese and hyperleptinemic ArcPomc−/− mice or those weight-matched and treated with PASylated leptin to maintain extreme hyperleptinemia prior to Pomc reactivation converged to an intermediate set point relative to lean control and obese ArcPomc−/− mice. We conclude that restoration of hypothalamic leptin sensitivity and Pomc expression is necessary for obese ArcPomc−/− mice to achieve and sustain normal metabolic homeostasis; whereas deficits in either parameter set a maladaptive allostatic balance that defends increased adiposity and body weight.

  17. Hoelder continuity of energy minimizer maps between Riemannian polyhedra

    International Nuclear Information System (INIS)

    Bouziane, Taoufik

    2004-10-01

    The goal of the present paper is to establish some kind of regularity of an energy minimizer map between Riemannian polyhedra. More precisely, we will show the Hoelder continuity of local energy minimizers between Riemannian polyhedra with the target spaces without focal points. With this new result, we also complete our existence theorem obtained elsewhere, and consequently we generalize completely, to the case of target polyhedra without focal points (which is a weaker geometric condition than the nonpositivity of the curvature), the Eells-Fuglede's existence and regularity theorem which is the new version of the famous Eells-Sampson's theorem. (author)

  18. Test-retest reliability and minimal detectable change of two simplified 3-point balance measures in patients with stroke.

    Science.gov (United States)

    Chen, Yi-Miau; Huang, Yi-Jing; Huang, Chien-Yu; Lin, Gong-Hong; Liaw, Lih-Jiun; Lee, Shih-Chieh; Hsieh, Ching-Lin

    2017-10-01

    The 3-point Berg Balance Scale (BBS-3P) and 3-point Postural Assessment Scale for Stroke Patients (PASS-3P) were simplified from the BBS and PASS to overcome their complex scoring systems. The BBS-3P and PASS-3P are more feasible in busy clinical practice and showed similarly sound validity and responsiveness to the original measures. However, the reliability of the BBS-3P and PASS-3P is unknown, limiting their utility and the interpretability of scores. We aimed to examine the test-retest reliability and minimal detectable change (MDC) of the BBS-3P and PASS-3P in patients with stroke. Cross-sectional study. The rehabilitation departments of a medical center and a community hospital. A total of 51 chronic stroke patients (64.7% male). Both balance measures were administered twice, 7 days apart. The test-retest reliability of both the BBS-3P and PASS-3P was examined by intraclass correlation coefficients (ICC). The MDC and its percentage over the total score (MDC%) of each measure were calculated to examine the random measurement errors. The ICC values of the BBS-3P and PASS-3P were 0.99 and 0.97, respectively. The MDC% (MDC) of the BBS-3P and PASS-3P were 9.1% (5.1 points) and 8.4% (3.0 points), respectively, indicating that both measures had small and acceptable random measurement errors. Our results showed that both the BBS-3P and the PASS-3P had good test-retest reliability, with small and acceptable random measurement error. These two simplified 3-level balance measures can provide reliable results over time. Our findings support the repeated administration of the BBS-3P and PASS-3P to monitor the balance of patients with stroke. The MDC values can help clinicians and researchers interpret change scores more precisely.
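The MDC statistic reported above follows from the ICC and the between-subject variability of the score. A minimal sketch of the standard formula (the SD value below is hypothetical, not taken from the study):

```python
import math

def mdc95(sd: float, icc: float) -> float:
    """Minimal detectable change at the 95% confidence level.

    SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM,
    where SD is the between-subject standard deviation of the score.
    """
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical example: score SD of 10 points with the reported ICC of 0.99
print(round(mdc95(10.0, 0.99), 2))  # → 2.77
```

A higher ICC shrinks the SEM and hence the MDC, which is why the near-unity ICCs above translate into small measurement errors.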

  19. Method to Minimize the Low-Frequency Neutral-Point Voltage Oscillations With Time-Offset Injection for Neutral-Point-Clamped Inverters

    DEFF Research Database (Denmark)

    Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum

    2015-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminating the neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing...

  20. May 2002 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs) from Dana Point to Point La Jolla.

  1. September 2002 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs) from Dana Point to Point La Jolla.

  2. Stages of the recognition and roentgenological semiotics of minimal peripheric lung cancer

    International Nuclear Information System (INIS)

    Lindenbraten, L.D.

    1987-01-01

    The system of diagnosis of peripheral cancer should be aimed at its detection at stage T1m, i.e. at the detection of a tumor whose shadow on a 70x70 mm radiogram was within 0.5-1.5 cm, and on a plain chest X-ray it was within. Fluorographic and roentgenographic semiotics of minimal peripheral cancer were considered in 40 cases. It was pointed out that the diagnosis of early stages of tumor development could be made only by improving the organizational basis of mass screening by setting up consultative cancer pulmonological commissions. Physicians should be aware of minimal changes in the pulmonary tissue.

  3. A Comparative Study of Applying Active-Set and Interior Point Methods in MPC for Controlling Nonlinear pH Process

    Directory of Open Access Journals (Sweden)

    Syam Syafiie

    2014-06-01

    A comparative study of Model Predictive Control (MPC) using the active-set method and the interior point method is proposed as a control technique for a highly non-linear pH process. The process is a strong acid-strong base system. A strong acid, hydrochloric acid (HCl), and a strong base, sodium hydroxide (NaOH), with the presence of the buffer solution sodium bicarbonate (NaHCO3), are used in a neutralization process flowing into a reactor. The non-linear pH neutralization model governing this process is represented by multi-linear models. The performance of both controllers is studied by evaluating their set-point tracking and disturbance rejection. In addition, the optimization time is compared between the two methods. Both MPCs show similar performance with no overshoot, offset, or oscillation. However, the conventional active-set method gives a shorter control action time for small-scale optimization problems than MPC using the interior point method for pH control.
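At each sampling instant, an MPC of this kind solves a small constrained optimization problem, and the active-set and interior-point families are two ways to solve it. A minimal sketch (not the authors' implementation; the toy objective and bounds below are invented) using SciPy's SLSQP (an active-set/SQP-style solver) and trust-constr (an interior-point-style solver):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for one MPC step: minimize deviation of the control move u
# from an unconstrained target, subject to actuator (box) limits.
u_target = np.array([2.0, -1.5])        # hypothetical unconstrained optimum
f = lambda u: np.sum((u - u_target) ** 2)
bounds = [(0.0, 1.0), (-1.0, 1.0)]      # hypothetical actuator limits

u_as = minimize(f, np.zeros(2), method="SLSQP", bounds=bounds).x        # active-set/SQP flavour
u_ip = minimize(f, np.zeros(2), method="trust-constr", bounds=bounds).x  # interior-point flavour

print(u_as, u_ip)  # both land on the constrained optimum, approximately [1.0, -1.0]
```

Both solvers reach the same constrained optimum; the practical difference the abstract reports is solve time on small problems, where active-set methods tend to win.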

  4. Subject-specific cardiovascular system model-based identification and diagnosis of septic shock with a minimally invasive data set: animal experiments and proof of concept

    International Nuclear Information System (INIS)

    Geoffrey Chase, J; Starfinger, Christina; Hann, Christopher E; Lambermont, Bernard; Ghuysen, Alexandre; Kolh, Philippe; Dauby, Pierre C; Desaive, Thomas; Shaw, Geoffrey M

    2011-01-01

    A cardiovascular system (CVS) model and parameter identification method have previously been validated for identifying different cardiac and circulatory dysfunctions in simulation and using porcine models of pulmonary embolism, hypovolemia with PEEP titrations and induced endotoxic shock. However, these studies required both left and right heart catheters to collect the data required for subject-specific monitoring and diagnosis—a maximally invasive data set in a critical care setting, although it does occur in practice. Hence, use of this model-based diagnostic would require significant additional invasive sensors for some subjects, which is unacceptable in some, if not all, cases. The main goal of this study is to prove the concept of using only measurements from one side of the heart (right) in a 'minimal' data set to identify an effective patient-specific model that can capture key clinical trends in endotoxic shock. This research extends existing methods to a reduced and minimal data set requiring only a single catheter, reducing the risk of infection and other complications—a very common, typical situation in critical care patients, particularly after cardiac surgery. The extended methods and the assumptions on which they are founded are developed and presented in a case study of the identification of pig-specific parameters in an animal model of induced endotoxic shock. This case study is used to define the impact of this minimal data set on the quality and accuracy of the model application for monitoring, detecting and diagnosing septic shock. Six anesthetized healthy pigs weighing 20–30 kg received a 0.5 mg/kg endotoxin infusion over a period of 30 min from T0 to T30. For this research, only right heart measurements were obtained. Errors for the identified model are within 8% when the model is identified from data, re-simulated and then compared to the experimentally measured data, including measurements not used in the

  5. Real-time estimation of FLE for point-based registration

    Science.gov (United States)

    Wiles, Andrew D.; Peters, Terry M.

    2009-02-01

    In image-guided surgery, optimizing the accuracy of localizing the surgical tools within the virtual reality environment or 3D image is vitally important, and significant effort has been spent reducing the measurement errors at the point of interest, or target. This target registration error (TRE) is often defined by a root-mean-square statistic, which reduces the vector data to a single term that can be minimized. However, lost in the data reduction is the directionality of the error, which can be modelled using a 3D covariance matrix. Recently, we developed a set of expressions that model the TRE statistics for point-based registrations as a function of the fiducial marker geometry, target location and the fiducial localizer error (FLE). Unfortunately, these expressions are only as good as the definition of the FLE. To close the gap, we have subsequently developed a closed-form expression that estimates the FLE as a function of the estimated fiducial registration error (FRE, the error between the measured fiducials and the best-fit locations of those fiducials). The FRE covariance matrix is estimated using a sliding-window technique and used as input into the closed-form expression to estimate the FLE. The estimated FLE can then be used to estimate the TRE, which can be given to the surgeon so that the procedure can be designed such that the errors associated with point-based registrations are minimized.
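The FRE that drives the FLE estimate above is the residual of a point-based rigid registration. A minimal sketch (illustrative only; the marker geometry, pose and noise level are hypothetical) of computing the best-fit rigid transform with an SVD-based (Kabsch) fit and reporting the residual RMS FRE:

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid (rotation + translation) fit via SVD (Kabsch)."""
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cf - R @ cm
    return R, t

rng = np.random.default_rng(0)
fiducials = rng.normal(size=(6, 3))                 # hypothetical marker geometry
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
# Measured fiducials = true pose plus per-point localizer noise (the FLE)
measured = fiducials @ Rz.T + 0.5 + rng.normal(scale=0.01, size=(6, 3))

R, t = rigid_register(measured, fiducials)
residuals = fiducials @ R.T + t - measured
fre = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
print(round(fre, 4))  # residual RMS FRE, on the order of the injected FLE noise
```

Estimating the FRE covariance over a sliding window of such residuals, as the abstract describes, then feeds the closed-form FLE estimate.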

  6. Cross-site comparisons of concentration-discharge relationships reveal climate-driven chemostatic set points

    Science.gov (United States)

    Godsey, S.; Kirchner, J. W.

    2017-12-01

    Streamflow solute concentrations often vary predictably with flows, providing insight into processes controlling solute generation and export. Previous work by the authors showed that log-transformed concentration-discharge relationships of weathering-derived solutes in 59 headwater catchments had relatively low slopes, implying that these watersheds behaved almost like chemostats. That is, their rates of solute production and/or mobilization were nearly proportional to water fluxes, on both event and inter-annual time scales. Here we re-examine these findings using data from roughly 1000 catchments, ranging from ˜10 to >1,000,000 sq. km in drainage area, and spanning a wide range of lithologic and climatic settings. Concentration-discharge relationships among this much larger set of much larger catchments are broadly consistent with the chemostatic behavior described above. However, site-to-site variations in mean concentrations among these catchments are negatively correlated with long-term average precipitation and discharge, suggesting dilution of stream concentrations under long-term leaching of the critical zone. Thus, on event and inter-annual time scales, stream solute concentrations are chemostatically buffered by groundwater storage and fast chemical reactions (such as ion exchange), but on much longer time scales, the catchment's chemostatic "set point" is determined by climatically driven critical zone evolution. We present examples illustrating short-term and long-term controls on water quality consistent with variations in weather and climate, and discuss their implications.
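Chemostatic behavior shows up as a near-zero slope b when concentration is regressed on discharge in log-log space, i.e. C = a·Q^b. A small synthetic illustration (the data are fabricated for the sketch, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = 10 ** rng.uniform(-1, 2, 200)   # discharge, spanning three orders of magnitude
# Weakly diluting solute: true exponent b = -0.05, with multiplicative noise
C = 50.0 * Q ** -0.05 * np.exp(rng.normal(scale=0.05, size=200))

# Fit log C = log a + b * log Q; |b| near 0 indicates chemostatic behavior
b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
print(round(b, 2))  # ≈ -0.05: concentration barely dilutes as flow varies
```

A purely diluting solute (fixed mass flux) would instead give b ≈ -1, so the fitted slope locates a catchment on the chemostatic-to-dilution spectrum.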

  7. One-dimensional Gromov minimal filling problem

    International Nuclear Information System (INIS)

    Ivanov, Alexandr O; Tuzhilin, Alexey A

    2012-01-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  8. Minimal genera of open 4-manifolds

    OpenAIRE

    Gompf, Robert E.

    2013-01-01

    We study exotic smoothings of open 4-manifolds using the minimal genus function and its analog for end homology. While traditional techniques in open 4-manifold smoothing theory give no control of minimal genera, we make progress by using the adjunction inequality for Stein surfaces. Smoothings can be constructed with much more control of these genus functions than the compact setting seems to allow. As an application, we expand the range of 4-manifolds known to have exotic smoothings (up to ...

  9. Neural set point for the control of arterial pressure: role of the nucleus tractus solitarius

    Directory of Open Access Journals (Sweden)

    Valentinuzzi Max E

    2010-01-01

    Background: Physiological experiments have shown that the mean arterial blood pressure (MAP) cannot be regulated after chemo- and cardiopulmonary receptor denervation. Neurophysiological information suggests that the nucleus tractus solitarius (NTS) is the only structure that receives information from its rostral neural nuclei and from the cardiovascular receptors and projects to nuclei that regulate the circulatory variables. Methods: From a control theory perspective, to answer whether cardiovascular regulation has a set point, we should find out whether in cardiovascular control there is something equivalent to a comparator evaluating the error signal (between the rostral projections to the NTS and the feedback inputs). The NTS would function as a comparator if: (a) its lesion suppresses cardiovascular regulation; (b) the negative feedback loop still responds normally to perturbations (such as mechanical or electrical) after cutting the rostral afferent fibers to the NTS; (c) perturbation of rostral neural structures (RNS) to the NTS modifies the set point without changing the dynamics of the elicited response; and (d) cardiovascular responses to perturbations on neural structures within the negative feedback loop are compensated for much faster than perturbations on the NTS rostral structures. Results: Within the control theory framework, experimental evidence currently found in the literature, together with experimental results from our group, shows that the above-mentioned conditions (for the NTS to function as a comparator) are satisfied. Conclusions: Physiological experiments suggest that long-term blood pressure is regulated by the nervous system. The NTS functions as a comparator (evaluating the error signal between its RNS and the cardiovascular receptor afferents) and projects to nuclei that regulate the circulatory variables. The mean arterial pressure (MAP) is regulated by the feedback of chemo and cardiopulmonary receptors and

  10. Fuzzy Logic Based Set-Point Weighting Controller Tuning for an Internal Model Control Based PID Controller

    Directory of Open Access Journals (Sweden)

    Maruthai Suresh

    2009-10-01

    Controller tuning is the process of adjusting the parameters of the selected controller to achieve an optimum response from the controlled process. For many control problems, satisfactory performance is obtained with PID controllers. One of the main problems with mathematical models of physical systems is that the parameters used in the models cannot be determined with absolute accuracy. The values of the parameters may change with time or with various effects. In these cases, conventional controller tuning methods struggle to produce an optimum response. In order to overcome these difficulties, a fuzzy logic based set-point weighting controller tuning method is proposed. The effectiveness of the proposed scheme is analyzed through computer simulation using SIMULINK software and the results are presented. The fuzzy logic based simulation results are compared with the responses of Cohen-Coon (CC), Ziegler-Nichols (ZN), Ziegler-Nichols with set-point weighting (ZN-SPW), Internal Model Control (IMC) and internal model based PID (IMC-PID) controllers. The effects of process modeling errors and the importance of controller tuning have been brought out using the proposed control scheme.
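Set-point weighting scales only the proportional term's view of the reference, softening the "set-point kick" while leaving disturbance rejection unchanged. A minimal sketch (the plant model and gains are hypothetical, not from the paper) of a PI loop with weight b on a first-order process:

```python
import numpy as np

def simulate(b, kp=2.0, ki=1.0, K=1.0, tau=5.0, dt=0.05, T=30.0, r=1.0):
    """PI control of a first-order plant dy/dt = (-y + K*u)/tau,
    with set-point weight b: u = kp*(b*r - y) + ki*integral(r - y)."""
    y, integ, ys = 0.0, 0.0, []
    for _ in range(int(T / dt)):
        e = r - y
        integ += e * dt
        u = kp * (b * r - y) + ki * integ   # only the P term sees the weighted reference
        y += dt * (-y + K * u) / tau        # explicit Euler step of the plant
        ys.append(y)
    return np.array(ys)

overshoot_full = simulate(b=1.0).max() - 1.0      # classical PI (b = 1)
overshoot_weighted = simulate(b=0.4).max() - 1.0  # weighted set point
print(overshoot_weighted < overshoot_full)  # → True: weighting tempers the overshoot
```

The fuzzy scheme in the paper goes one step further by adjusting the weight online instead of fixing it.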

  11. Waste minimization of a process fluid through effective control under various controllers tuning

    International Nuclear Information System (INIS)

    Younas, M.; Gul, S.; Naveed, S.

    2005-01-01

    Whenever a process is disturbed, either by the servo system or the regulatory system, control action is applied to return to the desired point. An efficient controller setting should be selected in order to get a speedy response within the constraints on product quality. Effective control action is desired to make maximum use of the raw material and to minimize waste. This is a critical problem in cases where the raw material or product is valuable and costly, e.g. pharmaceuticals. This problem has been addressed in this work on a laboratory scale plant. The plant consists of a feed tank, pumps, a plate and frame heat exchanger and a hot water re-circulator tank. The system responses were logged with a computer while the controller was tuned with Ziegler-Nichols (Z-N) and Cohen-Coon (C-C) tunings. A detailed study indicates that Ziegler-Nichols controller tuning is better than Cohen-Coon, as waste production was minimized. (author)
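Both tuning rules map a first-order-plus-dead-time reaction curve (process gain K, time constant tau, dead time theta) to PID settings. A sketch of the standard open-loop formulas (the process values in the example are hypothetical, not measured on the plant described):

```python
def ziegler_nichols_pid(K, tau, theta):
    """Open-loop (reaction-curve) Ziegler-Nichols PID settings."""
    return {"Kp": 1.2 * tau / (K * theta), "Ti": 2.0 * theta, "Td": 0.5 * theta}

def cohen_coon_pid(K, tau, theta):
    """Cohen-Coon PID settings for a first-order-plus-dead-time model."""
    r = theta / tau
    return {"Kp": (tau / (K * theta)) * (4.0 / 3.0 + r / 4.0),
            "Ti": theta * (32 + 6 * r) / (13 + 8 * r),
            "Td": 4 * theta / (11 + 2 * r)}

# Hypothetical heat-exchanger model: K = 2, tau = 120 s, theta = 15 s
print(ziegler_nichols_pid(2.0, 120.0, 15.0))  # Kp = 4.8, Ti = 30 s, Td = 7.5 s
print(cohen_coon_pid(2.0, 120.0, 15.0))
```

Cohen-Coon typically yields a slightly higher gain and longer integral time for the same model, which is one source of the differing waste figures the study reports.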

  12. Atmospheric bromoform at Cape Point, South Africa: an initial fixed-point data set on the African continent

    Directory of Open Access Journals (Sweden)

    B. Kuyper

    2018-04-01

    Bromoform mixing ratios in marine air were measured at Cape Point Global Atmospheric Watch Station, South Africa. This represents the first such bromoform data set recorded at this location. Manual daily measurements were made during a month-long field campaign (austral spring 2011) using a gas chromatograph-electron capture detector (GC-ECD) with a custom-built front end thermal desorption trap. The measured concentrations ranged between 4.4 and 64.6 (± 22.2 %) ppt with a mean of 24.8 ± 14.8 ppt. The highest mixing ratios recorded here occurred at, or shortly after, low tide. The diurnal cycle exhibited a morning and evening maximum with lower concentrations throughout the rest of the day. Initial analysis of the data presented indicates that the local kelp beds were the dominant source of the bromoform reported. A concentration-weighted trajectory analysis of the bromoform measurements suggests that two offshore source areas may exist. These source areas appear to be centred on the Agulhas retroflection and extend from St Helena Bay to the southwest.

  13. Atmospheric bromoform at Cape Point, South Africa: an initial fixed-point data set on the African continent

    Science.gov (United States)

    Kuyper, Brett; Palmer, Carl J.; Labuschagne, Casper; Reason, Chris J. C.

    2018-04-01

    Bromoform mixing ratios in marine air were measured at Cape Point Global Atmospheric Watch Station, South Africa. This represents the first such bromoform data set recorded at this location. Manual daily measurements were made during a month-long field campaign (austral spring 2011) using a gas chromatograph-electron capture detector (GC-ECD) with a custom-built front end thermal desorption trap. The measured concentrations ranged between 4.4 and 64.6 (± 22.2 %) ppt with a mean of 24.8 ± 14.8 ppt. The highest mixing ratios recorded here occurred at, or shortly after, low tide. The diurnal cycle exhibited a morning and evening maximum with lower concentrations throughout the rest of the day. Initial analysis of the data presented indicates that the local kelp beds were the dominant source of the bromoform reported. A concentration-weighted trajectory analysis of the bromoform measurements suggests that two offshore source areas may exist. These source areas appear to be centred on the Agulhas retroflection and extend from St Helena Bay to the southwest.

  14. Effects of equilibrium point displacement in limit cycle oscillation amplitude, critical frequency and prediction of critical input angular velocity in minimal brake system

    Science.gov (United States)

    Ganji, Hamed Faghanpour; Ganji, Davood Domiri

    2017-04-01

    In the present paper, the brake squeal phenomenon as a noise source in automobiles was studied. In most cases, the modeling work is carried out assuming that deformations are small; thus, the equilibrium point is set to zero and linearization is performed at this point. However, under certain circumstances the equilibrium point is not zero; therefore, huge errors in the prediction of brake squeal may occur. In this work, large motion domains were investigated with respect to the importance of linearization. Nonlinear equations of motion were considered, and the behavior of the system for the COF (coefficient of friction) model was analyzed by studying the amplitude and frequency of the limit cycle oscillation.

  15. Stabilization of a locally minimal forest

    Science.gov (United States)

    Ivanov, A. O.; Mel'nikova, A. E.; Tuzhilin, A. A.

    2014-03-01

    The method of partial stabilization of locally minimal networks, which was invented by Ivanov and Tuzhilin to construct examples of shortest trees with given topology, is developed. According to this method, boundary vertices of degree 2 are not added to all edges of the original locally minimal tree, but only to some of them. The problem of partial stabilization of locally minimal trees in a finite-dimensional Euclidean space is solved completely in the paper, that is, without any restrictions imposed on the number of edges remaining free of subdivision. A criterion for the realizability of such stabilization is established. In addition, the general problem of searching for the shortest forest connecting a finite family of boundary compact sets in an arbitrary metric space is formalized; it is shown that such forests exist for any family of compact sets if and only if for any finite subset of the ambient space there exists a shortest tree connecting it. The theory developed here allows us to establish further generalizations of the stabilization theorem both for arbitrary metric spaces and for metric spaces with some special properties. Bibliography: 10 titles.

  16. Acquiring minimally invasive surgical skills

    NARCIS (Netherlands)

    Hiemstra, Ellen

    2012-01-01

    Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to find this scientific basis. We have focused on training and evaluation of minimally invasive surgical skills in a training setting and in practice in the operating room.

  17. Learning Agent for a Heat-Pump Thermostat with a Set-Back Strategy Using Model-Free Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Frederik Ruelens

    2015-08-01

    The conventional control paradigm for a heat pump with a less efficient auxiliary heating element is to keep its temperature set point constant during the day. This constant temperature set point ensures that the heat pump operates in its more efficient heat-pump mode and minimizes the risk of activating the less efficient auxiliary heating element. As an alternative to a constant set-point strategy, this paper proposes a learning agent for a thermostat with a set-back strategy. This set-back strategy relaxes the set-point temperature during convenient moments, e.g., when the occupants are not at home. Finding an optimal set-back strategy requires solving a sequential decision-making process under uncertainty, which presents two challenges. The first challenge is that for most residential buildings, a description of the thermal characteristics of the building is unavailable and challenging to obtain. The second challenge is that the relevant information on the state, i.e., the building envelope, cannot be measured by the learning agent. In order to overcome these two challenges, our paper proposes an auto-encoder coupled with a batch reinforcement learning technique. The proposed approach is validated for two building types with different thermal characteristics for heating in the winter and cooling in the summer. The simulation results indicate that the proposed learning agent can reduce the energy consumption by 4%–9% during 100 winter days and by 9%–11% during 80 summer days compared to the conventional constant set-point strategy.
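The agent above learns from experience when to relax the set point. As a much-simplified stand-in for the paper's auto-encoder plus batch RL pipeline (the states, actions and reward values below are invented for illustration), tabular Q-learning already captures the idea of learning a set-back policy:

```python
import random

random.seed(0)
STATES = ["occupied", "away"]
ACTIONS = ["comfort", "set_back"]

def reward(state, action):
    energy_cost = -1.0 if action == "comfort" else -0.2   # hypothetical heating cost
    discomfort = -5.0 if (state == "occupied" and action == "set_back") else 0.0
    return energy_cost + discomfort

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1
state = "occupied"
for _ in range(5000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[(state, x)])
    r = reward(state, a)
    nxt = random.choice(STATES)  # occupancy evolves exogenously in this toy model
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS) - Q[(state, a)])
    state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)  # learned: comfort set point when occupied, set-back when away
```

The paper's harder problem is that occupancy and building-envelope state are not directly observable, which is what the auto-encoder and batch (fitted) RL address.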

  18. Development of a waste minimization plan for a Department of Energy remedial action program: Ideas for minimizing waste in remediation scenarios

    International Nuclear Information System (INIS)

    Hubbard, Linda M.; Galen, Glen R.

    1992-01-01

    Waste minimization has become an important consideration in the management of hazardous waste because of regulatory as well as cost considerations. Waste minimization techniques are often process specific or industry specific and generally are not applicable to site remediation activities. This paper will examine ways in which waste can be minimized in a remediation setting such as the U.S. Department of Energy's Formerly Utilized Sites Remedial Action Program, where the bulk of the waste produced results from remediating existing contamination, not from generating new waste. (author)

  19. Geometry of minimal rational curves on Fano manifolds

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, J -M [Korea Institute for Advanced Study, Seoul (Korea, Republic of)

    2001-12-15

    This lecture is an introduction to my joint project with N. Mok where we develop a geometric theory of Fano manifolds of Picard number 1 by studying the collection of tangent directions of minimal rational curves through a generic point. After a sketch of some historical background, the fundamental object of this project, the variety of minimal rational tangents, is defined and various examples are examined. Then some results on the variety of minimal rational tangents are discussed including an extension theorem for holomorphic maps preserving the geometric structure. Some applications of this theory to the stability of the tangent bundles and the rigidity of generically finite morphisms are given. (author)

  20. On the convergence of nonconvex minimization methods for image recovery.

    Science.gov (United States)

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of the nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
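An alternating minimization scheme of this kind exactly minimizes over one block of variables at a time. A toy sketch (a scalar smooth-plus-nonsmooth objective invented for illustration, not the paper's image-restoration model) where each block update has a closed form and the iterates settle at a critical point:

```python
def soft_threshold(v, lam):
    """Exact minimizer of (y - v)^2 ... wait: of (1/2-scaled) quadratic plus lam*|y|."""
    return max(abs(v) - lam, 0.0) * (1.0 if v > 0 else -1.0)

# f(x, y) = (x - 2)^2 + (x - y)^2 + |y|   (smooth data terms + nonsmooth |y|)
x, y = 0.0, 0.0
for _ in range(100):
    x = (2.0 + y) / 2.0           # exact minimizer of f in x for fixed y
    y = soft_threshold(x, 0.5)    # exact minimizer of f in y for fixed x

print(round(x, 4), round(y, 4))  # → 1.5 1.0, a critical point of f
```

At (1.5, 1.0) both partial optimality conditions hold (2(x-2) + 2(x-y) = 0 and 2(y-x) + sign(y) = 0), matching the kind of critical-point convergence the paper proves under the Kurdyka-Łojasiewicz property.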

  1. An updated global grid point surface air temperature anomaly data set: 1851-1990

    Energy Technology Data Exchange (ETDEWEB)

    Sepanski, R.J.; Boden, T.A.; Daniels, R.C.

    1991-10-01

    This document presents land-based monthly surface air temperature anomalies (departures from a 1951-1970 reference period mean) on a 5° latitude by 10° longitude global grid. Monthly surface air temperature anomalies (departures from a 1957-1975 reference period mean) for the Antarctic (grid points from 65°S to 85°S) are presented in a similar way as a separate data set. The data were derived primarily from the World Weather Records and the archives of the United Kingdom Meteorological Office. This long-term record of temperature anomalies may be used in studies addressing possible greenhouse-gas-induced climate changes. To date, the data have been employed in generating regional, hemispheric, and global time series for determining whether recent (i.e., post-1900) warming trends have taken place. This document also presents the monthly mean temperature records for the individual stations that were used to generate the set of gridded anomalies. The periods of record vary by station. Northern Hemisphere station data have been corrected for inhomogeneities, while Southern Hemisphere data are presented in uncorrected form. 14 refs., 11 figs., 10 tabs.
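An anomaly series of this kind is simply each value's departure from the mean over the reference window. A small sketch (the temperature series is synthetic, invented for illustration) of computing 1951-1970-referenced anomalies for one grid cell:

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1851, 1991)
# Hypothetical annual-mean temperatures (deg C) with a slight warming trend
temps = 14.0 + 0.005 * (years - 1851) + rng.normal(scale=0.3, size=years.size)

# Anomaly = departure from the 1951-1970 reference-period mean
ref_mask = (years >= 1951) & (years <= 1970)
anomalies = temps - temps[ref_mask].mean()

# By construction the anomalies average to zero over the reference period
print(abs(anomalies[ref_mask].mean()) < 1e-9)  # → True
```

Using a fixed reference period makes anomalies comparable across grid cells with very different absolute temperatures, which is why the data set reports departures rather than raw means.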

  2. Geometric Measure Theory and Minimal Surfaces

    CERN Document Server

    Bombieri, Enrico

    2011-01-01

    W.K. ALLARD: On the first variation of area and generalized mean curvature.- F.J. ALMGREN Jr.: Geometric measure theory and elliptic variational problems.- E. GIUSTI: Minimal surfaces with obstacles.- J. GUCKENHEIMER: Singularities in soap-bubble-like and soap-film-like surfaces.- D. KINDERLEHRER: The analyticity of the coincidence set in variational inequalities.- M. MIRANDA: Boundaries of Caccioppoli sets in the calculus of variations.- L. PICCININI: De Giorgi's measure and thin obstacles.

  3. Minimizing Banking Risk in a Lévy Process Setting

    Directory of Open Access Journals (Sweden)

    F. Gideon

    2007-01-01

    The primary functions of a bank are to obtain funds through deposits from external sources and to use the said funds to issue loans. Moreover, risk management practices related to the withdrawal of these bank deposits have always been of considerable interest. In this spirit, we construct Lévy process-driven models of banking reserves in order to address the problem of hedging deposit withdrawals from such institutions by means of reserves. Here reserves are related to outstanding debt and act as a proxy for the assets held by the bank. The aforementioned modeling enables us to formulate a stochastic optimal control problem related to the minimization of reserve, depository, and intrinsic risk, which are associated with the reserve process, the net cash flows from depository activity, and the cumulative costs of the bank's provisioning strategy, respectively. A discussion of the main risk management issues arising from the optimization problem mentioned earlier forms an integral part of our paper. This includes the presentation of a numerical example involving a simulation of the provisions made for deposit withdrawals via treasuries and reserves.

  4. Firefly algorithm based solution to minimize the real power loss in a power system

    Directory of Open Access Journals (Sweden)

    P. Balachennaiah

    2018-03-01

    Full Text Available This paper proposes a method to minimize the real power loss (RPL of a power system transmission network using a new meta-heuristic algorithm known as firefly algorithm (FA by optimizing the control variables such as transformer taps, UPFC location and UPFC series injected voltage magnitude and phase angle. A software program is developed in MATLAB environment for FA to minimize the RPL by optimizing (i only the transformer tap values, (ii only UPFC location and its variables with optimized tap values and (iii UPFC location and its variables along with transformer tap setting values simultaneously. Interior point successive linear programming (IPSLP technique and real coded genetic algorithm (RCGA are considered here to compare the results and to show the efficiency and superiority of the proposed FA towards the optimization of RPL. Also in this paper, bacteria foraging algorithm (BFA is adopted to validate the results of the proposed algorithm.
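As a rough illustration of the firefly algorithm the record applies, the sketch below minimizes a toy sphere function in place of the real power loss; the population size, attractiveness parameters, and search bounds are illustrative assumptions, not the paper's settings for transformer taps or UPFC variables.

```python
import math
import random

# Illustrative firefly-algorithm (FA) sketch on a toy 2-variable problem.
# Each firefly moves toward every brighter (lower-objective) firefly with
# distance-dependent attractiveness, plus a decaying random walk.

def sphere(x):
    return sum(v * v for v in x)

def firefly_minimize(f, dim=2, n=15, iters=60, beta0=1.0, gamma=0.01,
                     alpha=0.2, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        pop.sort(key=f)                      # brighter fireflies first
        for i in range(n):
            for j in range(i):               # move i toward every brighter j
                r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                beta = beta0 * math.exp(-gamma * r2)   # attractiveness
                pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                          for a, b in zip(pop[i], pop[j])]
        alpha *= 0.97                        # gradually damp the random walk
    return min(pop, key=f)

best = firefly_minimize(sphere)
```

Because the current best firefly is never moved, the best objective value found is non-increasing across iterations, mirroring how the paper tracks the RPL reduction across candidate tap/UPFC settings.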

  5. Method to minimize the low-frequency neutral-point voltage oscillations with time-offset injection for neutral-point-clamped inverters

    DEFF Research Database (Denmark)

    Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede

    2013-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time-offset to the three phase turn-on times. The proper time-offset is simply calculated considering the phase currents and dwell...

  6. Zero-point length from string fluctuations

    International Nuclear Information System (INIS)

    Fontanini, Michele; Spallucci, Euro; Padmanabhan, T.

    2006-01-01

    One of the leading candidates for quantum gravity, viz. string theory, has the following features incorporated in it. (i) The full spacetime is higher-dimensional, with (possibly) compact extra-dimensions; (ii) there is a natural minimal length below which the concept of continuum spacetime needs to be modified by some deeper concept. On the other hand, the existence of a minimal length (zero-point length) in four-dimensional spacetime, with obvious implications as UV regulator, has been often conjectured as a natural aftermath of any correct quantum theory of gravity. We show that one can incorporate the apparently unrelated pieces of information (zero-point length, extra-dimensions, string T-duality) in a consistent framework. This is done in terms of a modified Kaluza-Klein theory that interpolates between (high-energy) string theory and (low-energy) quantum field theory. In this model, the zero-point length in four dimensions is a 'virtual memory' of the length scale of compact extra-dimensions. Such a scale turns out to be determined by T-duality inherited from the underlying fundamental string theory. From a low-energy perspective, short-distance infinities are cut off by a minimal length which is proportional to the square root of the string slope α′. Thus, we bridge the gap between the string theory domain and the low-energy arena of point-particle quantum field theory.

  7. Computer program to fit a hyperellipse to a set of phase-space points in as many as six dimensions

    International Nuclear Information System (INIS)

    Wadlinger, E.A.

    1980-03-01

    A computer program that will fit a hyperellipse to a set of phase-space points in as many as 6 dimensions was written and tested. The weight assigned to the phase-space points can be varied as a function of their distance from the centroid of the distribution. Varying the weight enables determination of whether there is a difference in ellipse orientation between inner and outer particles. This program should be useful in studying the effects of longitudinal and transverse phase-space couplings
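A minimal 2-D analogue of the fit the record describes: the ellipse is estimated from weighted second moments about the centroid, with a user-supplied weight that depends on the squared distance from the centroid. The reduction to 2-D, the function names, and the default weight are illustrative assumptions, not the original program.

```python
# Sketch: fit an ellipse (covariance matrix) to a 2-D point set, with a
# weight function of squared distance from the centroid, so inner and
# outer particles can be weighted differently as in the abstract.

def weighted_ellipse(points, weight=lambda d2: 1.0):
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    w = [weight((x - cx) ** 2 + (y - cy) ** 2) for x, y in points]
    W = sum(w)
    sxx = sum(wi * (x - cx) ** 2 for wi, (x, y) in zip(w, points)) / W
    syy = sum(wi * (y - cy) ** 2 for wi, (x, y) in zip(w, points)) / W
    sxy = sum(wi * (x - cx) * (y - cy) for wi, (x, y) in zip(w, points)) / W
    return (cx, cy), (sxx, sxy, syy)   # centroid, covariance entries

centroid, cov = weighted_ellipse([(1.0, 0.0), (-1.0, 0.0), (0.0, 2.0), (0.0, -2.0)])
```

Passing, e.g., `weight=lambda d2: 1.0 / (1.0 + d2)` down-weights outer points; comparing the resulting orientation with the unweighted fit is the kind of inner/outer comparison the abstract mentions.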

  8. Banach spaces that realize minimal fillings

    International Nuclear Information System (INIS)

    Bednov, B. B.; Borodin, P. A.

    2014-01-01

    It is proved that a real Banach space realizes minimal fillings for all its finite subsets (a shortest network spanning a fixed finite subset always exists and has the minimum possible length) if and only if it is a predual of L₁. The spaces L₁ are characterized in terms of Steiner points (medians). Bibliography: 25 titles. (paper)

  9. An information geometric approach to least squares minimization

    Science.gov (United States)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
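The damped Levenberg-Marquardt step the abstract refers to can be sketched for a one-parameter model; the model y = exp(b·x), the data, and the damping schedule below are illustrative assumptions, not from the paper.

```python
import math

# Sketch of Levenberg-Marquardt for a one-parameter model y = exp(b * x):
# a Gauss-Newton step damped by lambda, with lambda adapted on accept/reject.

def lm_fit(xs, ys, b=0.0, lam=1e-3, iters=50):
    def cost(b):
        return sum((y - math.exp(b * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        # residuals r_i = y_i - f(x_i, b) and Jacobian dr_i/db = -x_i exp(b x_i)
        r = [y - math.exp(b * x) for x, y in zip(xs, ys)]
        J = [-x * math.exp(b * x) for x in xs]
        g = sum(j * ri for j, ri in zip(J, r))   # J^T r
        h = sum(j * j for j in J)                # J^T J
        step = -g / (h + lam)                    # damped (LM) step
        if cost(b + step) < cost(b):
            b, lam = b + step, lam * 0.5         # accept: trust the model more
        else:
            lam *= 10.0                          # reject: increase damping
    return b

xs = [0.0, 0.5, 1.0, 1.5]
ys = [math.exp(0.7 * x) for x in xs]
b_hat = lm_fit(xs, ys)
```

In the geometric picture, the residual vector is a point on the model manifold and each accepted step moves it closer to the origin of data space.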

  10. Iterative closest normal point for 3D face recognition.

    Science.gov (United States)

    Mohammadzade, Hoda; Hatzinakos, Dimitrios

    2013-02-01

    The common approach for 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all versus all and ROC III experiments, respectively; to the best of our knowledge, these error rates are seven and four times lower than those of the best existing methods on this database.
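A rough sketch of the correspondence idea described above: match a reference sample by similarity of unit surface normals rather than by spatial proximity. The function and data layout are illustrative assumptions, not the authors' implementation.

```python
# Sketch: pick, for a reference sample's unit normal, the input point whose
# unit surface normal is most aligned with it (maximal dot product),
# instead of the spatially nearest point as in plain ICP.

def closest_normal_point(ref_normal, input_points):
    """input_points: list of (point, unit_normal) pairs."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return max(input_points, key=lambda pn: dot(ref_normal, pn[1]))[0]

pts = [((1.0, 2.0, 3.0), (0.0, 0.0, 1.0)),
       ((4.0, 5.0, 6.0), (1.0, 0.0, 0.0))]
match = closest_normal_point((0.0, 0.0, 1.0), pts)
```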

  11. International urodynamic basic spinal cord injury data set.

    Science.gov (United States)

    Biering-Sørensen, F; Craggs, M; Kennelly, M; Schick, E; Wyndaele, J-J

    2008-07-01

    To create the International Urodynamic Basic Spinal Cord Injury (SCI) Data Set within the framework of the International SCI Data Sets. International working group. The draft of the data set was developed by a working group consisting of members appointed by the Neurourology Committee of the International Continence Society, the European Association of Urology, the American Spinal Injury Association (ASIA), the International Spinal Cord Society (ISCoS) and a representative of the Executive Committee of the International SCI Standards and Data Sets. The final version of the data set was developed after review and comments by members of the Executive Committee of the International SCI Standards and Data Sets, the ISCoS Scientific Committee, ASIA Board, relevant and interested (international) organizations and societies (around 40) and persons and the ISCoS Council. Endorsement of the data set by relevant organizations and societies will be obtained. To make the data set uniform, each variable and each response category within each variable have been specifically defined in a way that is designed to promote the collection and reporting of comparable minimal data. Variables included in the International Urodynamic Basic SCI Data Set are date of data collection, bladder sensation during filling cystometry, detrusor function, compliance during filling cystometry, function during voiding, detrusor leak point pressure, maximum detrusor pressure, cystometric bladder capacity and post-void residual volume.

  12. Barriers to Point-of-Care Testing in India: Results from Qualitative Research across Different Settings, Users and Major Diseases

    Science.gov (United States)

    Engel, Nora; Ganesh, Gayatri; Patil, Mamata; Yellappa, Vijayashree; Pant Pai, Nitika; Vadnais, Caroline; Pai, Madhukar

    2015-01-01

    Background Successful point-of-care testing, namely ensuring the completion of the test and treat cycle in the same encounter, has immense potential to reduce diagnostic and treatment delays, and impact patient outcomes. However, having rapid tests is not enough, as many barriers may prevent their successful implementation in point-of-care testing programs. Qualitative research on diagnostic practices may help identify such barriers across different points of care in health systems. Methods In this exploratory qualitative study, we conducted 78 semi-structured interviews and 13 focus group discussions in an urban and rural area of Karnataka, India, with healthcare providers (doctors, nurses, specialists, traditional healers, and informal providers), patients, community health workers, test manufacturers, laboratory technicians, program managers and policy-makers. Participants were purposively sampled to represent settings of hospitals, peripheral labs, clinics, communities and homes, in both the public and private sectors. Results In the Indian context, the onus is on the patient to ensure successful point-of-care testing across homes, clinics, labs and hospitals, amidst uncoordinated providers with divergent and often competing practices, in settings lacking material, money and human resources. We identified three overarching themes affecting point-of-care testing: the main theme is ‘relationships’ among providers and between providers and patients, influenced by the cross-cutting theme of ‘infrastructure’. Challenges with both result in ‘modified practices’ often favouring empirical (symptomatic) treatment over treatment guided by testing. Conclusions Even if tests can be conducted on the spot and infrastructure challenges have been resolved, relationships among providers and between patients and providers are crucial for successful point-of-care testing. Furthermore, these barriers do not act in isolation, but are interlinked and need to be examined as such.

  13. Barriers to Point-of-Care Testing in India: Results from Qualitative Research across Different Settings, Users and Major Diseases.

    Directory of Open Access Journals (Sweden)

    Nora Engel

    Full Text Available Successful point-of-care testing, namely ensuring the completion of the test and treat cycle in the same encounter, has immense potential to reduce diagnostic and treatment delays, and impact patient outcomes. However, having rapid tests is not enough, as many barriers may prevent their successful implementation in point-of-care testing programs. Qualitative research on diagnostic practices may help identify such barriers across different points of care in health systems. In this exploratory qualitative study, we conducted 78 semi-structured interviews and 13 focus group discussions in an urban and rural area of Karnataka, India, with healthcare providers (doctors, nurses, specialists, traditional healers, and informal providers), patients, community health workers, test manufacturers, laboratory technicians, program managers and policy-makers. Participants were purposively sampled to represent settings of hospitals, peripheral labs, clinics, communities and homes, in both the public and private sectors. In the Indian context, the onus is on the patient to ensure successful point-of-care testing across homes, clinics, labs and hospitals, amidst uncoordinated providers with divergent and often competing practices, in settings lacking material, money and human resources. We identified three overarching themes affecting point-of-care testing: the main theme is 'relationships' among providers and between providers and patients, influenced by the cross-cutting theme of 'infrastructure'. Challenges with both result in 'modified practices' often favouring empirical (symptomatic) treatment over treatment guided by testing. Even if tests can be conducted on the spot and infrastructure challenges have been resolved, relationships among providers and between patients and providers are crucial for successful point-of-care testing. Furthermore, these barriers do not act in isolation, but are interlinked and need to be examined as such. Also, a test alone has only

  14. Minimal Representations and Reductive Dual Pairs in Conformal Field Theory

    International Nuclear Information System (INIS)

    Todorov, Ivan

    2010-01-01

    A minimal representation of a simple non-compact Lie group is obtained by 'quantizing' the minimal nilpotent coadjoint orbit of its Lie algebra. It provides context for Roger Howe's notion of a reductive dual pair encountered recently in the description of global gauge symmetry of a (4-dimensional) conformal observable algebra. We give a pedagogical introduction to these notions and point out that physicists have been using both minimal representations and dual pairs without naming them and hence stand a chance to understand their theory and to profit from it.

  15. Minimalism's Grace.

    Science.gov (United States)

    Mills, Mark

    2003-01-01

    Notes that central to the short story form are three tools of fiction: voice; point of view; and setting. Discusses examples of short stories by famous authors. Explains that the very short story has become popular with high school and college teachers as a way to pique students' interest in writing fiction and in analyzing complex longer stories…

  16. Stabilization of a locally minimal forest

    International Nuclear Information System (INIS)

    Ivanov, A O; Mel'nikova, A E; Tuzhilin, A A

    2014-01-01

    The method of partial stabilization of locally minimal networks, which was invented by Ivanov and Tuzhilin to construct examples of shortest trees with given topology, is developed. According to this method, boundary vertices of degree 2 are not added to all edges of the original locally minimal tree, but only to some of them. The problem of partial stabilization of locally minimal trees in a finite-dimensional Euclidean space is solved completely in the paper, that is, without any restrictions imposed on the number of edges remaining free of subdivision. A criterion for the realizability of such stabilization is established. In addition, the general problem of searching for the shortest forest connecting a finite family of boundary compact sets in an arbitrary metric space is formalized; it is shown that such forests exist for any family of compact sets if and only if for any finite subset of the ambient space there exists a shortest tree connecting it. The theory developed here allows us to establish further generalizations of the stabilization theorem both for arbitrary metric spaces and for metric spaces with some special properties. Bibliography: 10 titles

  17. Minimally Invasive Parathyroidectomy

    Directory of Open Access Journals (Sweden)

    Lee F. Starker

    2011-01-01

    Full Text Available Minimally invasive parathyroidectomy (MIP) is an operative approach for the treatment of primary hyperparathyroidism (pHPT). Currently, routine use of improved preoperative localization studies, cervical block anesthesia in the conscious patient, and intraoperative parathyroid hormone analyses aid in guiding surgical therapy. MIP requires less surgical dissection causing decreased trauma to tissues, can be performed safely in the ambulatory setting, and is at least as effective as standard cervical exploration. This paper reviews advances in preoperative localization, anesthetic techniques, and intraoperative management of patients undergoing MIP for the treatment of pHPT.

  18. Primal Interior Point Method for Minimization of Generalized Minimax Functions

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2010-01-01

    Roč. 46, č. 4 (2010), s. 697-721 ISSN 0023-5954 R&D Projects: GA ČR GA201/09/1957 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * nonsmooth optimization * generalized minimax optimization * interior-point methods * modified Newton methods * variable metric methods * global convergence * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://dml.cz/handle/10338.dmlcz/140779

  19. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory.

  20. The Effects of Set-Points and Dead-Bands of the HVAC System on the Energy Consumption and Occupant Thermal Comfort

    DEFF Research Database (Denmark)

    Kazanci, Ongun Berk; Olesen, Bjarne W.

    2013-01-01

    A building is a complex system where many components interact with each other; therefore, the control system plays a key role regarding the energy consumption and the occupant thermal comfort. This study is concerned with a detached, one-storey, single family, energy-plus house. It is equipped ... with a ground heat exchanger, a ground coupled heat pump, embedded pipes in the floor and in the ceiling, a ventilation system (mechanical and natural), a domestic hot water tank and photovoltaic/thermal panels on the roof. ... on the effects of the set-points and dead-bands of different components on the energy consumption together with the occupant thermal comfort. Evaluations are carried out with TRNSYS for Copenhagen and Madrid in order to compare climatic effects. Preliminary evaluations showed that for Madrid, change of indoor set-point in cooling...
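The set-point/dead-band interplay the study varies can be illustrated with a toy two-position controller: the equipment switches only when the temperature leaves the band, so a wider band means fewer switchings (lower energy/wear) at the cost of larger comfort swings. The temperatures and band width below are arbitrary assumptions.

```python
# Toy dead-band (hysteresis) thermostat around a heating set-point.

def thermostat(temp, heating_on, set_point=22.0, dead_band=1.0):
    lo = set_point - dead_band / 2
    hi = set_point + dead_band / 2
    if temp < lo:
        return True       # below the band: switch heating on
    if temp > hi:
        return False      # above the band: switch heating off
    return heating_on     # inside the band: keep the current state
```

Sweeping `set_point` and `dead_band` in a building simulation (as the study does with TRNSYS) then maps out the energy-vs-comfort trade-off.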

  1. Minimizing the Pacman effect

    International Nuclear Information System (INIS)

    Ritson, D.; Chou, W.

    1997-10-01

    The Pacman bunches will experience two deleterious effects: tune shift and orbit displacement. It is known that the tune shift can be compensated by arranging crossing planes 90° relative to each other at successive interaction points (IPs). This paper gives an analytical estimate of the Pacman orbit displacement for a single as well as for two crossings. For the latter, it can be minimized by using equal phase advances from one IP to another. In the LHC, this displacement is in any event small and can be neglected.

  2. Minimizing tip-sample forces in jumping mode atomic force microscopy in liquid

    Energy Technology Data Exchange (ETDEWEB)

    Ortega-Esteban, A. [Departamento de Fisica de la Materia Condensada, C-3, Universidad Autonoma de Madrid, Cantoblanco, 28049 Madrid (Spain); Horcas, I. [Nanotec Electronica S.L., Centro Empresarial Euronova 3, Ronda de Poniente 12, 28760 Tres Cantos, Madrid (Spain); Hernando-Perez, M. [Departamento de Fisica de la Materia Condensada, C-3, Universidad Autonoma de Madrid, Cantoblanco, 28049 Madrid (Spain); Ares, P. [Nanotec Electronica S.L., Centro Empresarial Euronova 3, Ronda de Poniente 12, 28760 Tres Cantos, Madrid (Spain); Perez-Berna, A.J.; San Martin, C.; Carrascosa, J.L. [Centro Nacional de Biotecnologia (CNB-CSIC), Darwin 3, 28049 Madrid (Spain); Pablo, P.J. de [Departamento de Fisica de la Materia Condensada, C-3, Universidad Autonoma de Madrid, Cantoblanco, 28049 Madrid (Spain); Gomez-Herrero, J., E-mail: julio.gomez@uam.es [Departamento de Fisica de la Materia Condensada, C-3, Universidad Autonoma de Madrid, Cantoblanco, 28049 Madrid (Spain)

    2012-03-15

    Control and minimization of tip-sample interaction forces are imperative tasks to maximize the performance of atomic force microscopy. In particular, when imaging soft biological matter in liquids, the cantilever dragging force prevents identification of the tip-sample mechanical contact, resulting in deleterious interaction with the specimen. In this work we present an improved jumping mode procedure that allows detecting the tip-sample contact with high accuracy, thus minimizing the scanning forces (≈100 pN) during the approach cycles. To illustrate this method we report images of human adenovirus and T7 bacteriophage particles which are prone to uncontrolled modifications when using conventional jumping mode. -- Highlights: ► Improvement in atomic force microscopy in buffer solution. ► Peak force detection. ► Subtracting the cantilever dragging force. ► Forces in the 100 pN range. ► Imaging of delicate viruses with atomic force microscopy.

  3. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  4. Discrimination of the change point in a quantum setting

    International Nuclear Information System (INIS)

    Akimoto, Daiki; Hayashi, Masahito

    2011-01-01

    In the change point problem, we determine when the observed distribution has changed to another one. We expand this problem to a quantum case where copies of an unknown pure state are being distributed. That is, we estimate when the distributed quantum pure state is changed. As the most fundamental case, we treat the problem of deciding the true change point t_c between the two given candidates t_1 and t_2. Our problem is mathematically equal to identifying a given state with one of the two unknown states when multiple copies of the states are provided. The minimum of the averaged error probability is given and the optimal positive operator-valued measure (POVM) is given to obtain it when the initial and final quantum pure states are subject to the invariant prior. We also compute the error probability for deciding the change point under the above POVM when the initial and final quantum pure states are fixed. These analytical results allow us to calculate the value in the asymptotic case.

  5. Combining different types of scale space interest points using canonical sets

    NARCIS (Netherlands)

    Kanters, F.M.W.; Denton, T.; Shokoufandeh, A.; Florack, L.M.J.; Haar Romenij, ter B.M.; Sgallari, F.; Murli, A.; Paragios, N.

    2007-01-01

    Scale space interest points capture important photometric and deep structure information of an image. The information content of such points can be made explicit using image reconstruction. In this paper we will consider the problem of combining multiple types of interest points used for image

  6. Controllers with Minimal Observation Power (Application to Timed Systems)

    DEFF Research Database (Denmark)

    Bulychev, Petr; Cassez, Franck; David, Alexandre

    2012-01-01

    We consider the problem of controller synthesis under imperfect information in a setting where there is a set of available observable predicates equipped with a cost function. The problem that we address is the computation of a subset of predicates sufficient for control and whose cost is minimal...

  7. Diophantine and minimal but not uniquely ergodic (almost)

    International Nuclear Information System (INIS)

    Kwapisz, Jaroslaw; Mathison, Mark

    2012-01-01

    We demonstrate that minimal non-uniquely ergodic behaviour can be generated by slowing down a simple harmonic oscillator with diophantine frequency, in contrast with the known examples where the frequency is well approximable by the rationals. The slowing is effected by a singular time change that brings one phase point to rest. The time one-map of the flow has uncountably many invariant measures yet every orbit is dense, with the minor exception of the rest point

  8. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin; Guermond, Jean-Luc; Popov, Bojan

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced.

  9. An existence result of energy minimizer maps between Riemannian polyhedra

    International Nuclear Information System (INIS)

    Bouziane, T.

    2004-06-01

    In this paper, we prove the existence of energy minimizers in each free homotopy class of maps between polyhedra with target space without focal points. Our proof involves a careful study of some geometric properties of Riemannian polyhedra without focal points. Among other things, we show that on the relevant polyhedra, there exists a convex supporting function. (author)

  10. Common-cause analysis using sets

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1977-12-01

    Common-cause analysis was developed at the Aerojet Nuclear Company for studying the behavior of a system that is affected by special conditions and secondary causes. Common-cause analysis is related to fault tree analysis. Common-cause candidates are minimal cut sets whose primary events are closely linked by a special condition or are susceptible to the same secondary cause. It is shown that common-cause candidates can be identified using the Set Equation Transformation System (SETS). A Boolean equation is used to establish the special conditions and secondary cause susceptibilities for each primary event in the fault tree. A transformation of variables (substituting equals for equals), executed on a minimal cut set equation, results in replacing each primary event by the right side of its special condition/secondary cause equation and leads to the identification of the common-cause candidates
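The minimal cut sets that common-cause candidates are drawn from can be computed by the kind of top-down gate expansion described here (OR gates union cut sets, AND gates take cross-products, then non-minimal sets are discarded). The tiny fault tree below is an illustrative assumption, not output from SETS.

```python
# Sketch: MOCUS-style resolution of a fault tree into minimal cut sets.
# Gates map name -> (kind, inputs); anything not in the map is a primary event.

def cut_sets(gate, gates):
    if gate not in gates:                  # primary (basic) event
        return [frozenset([gate])]
    kind, inputs = gates[gate]
    if kind == "OR":                       # OR: union of the inputs' cut sets
        sets = [cs for g in inputs for cs in cut_sets(g, gates)]
    else:                                  # AND: cross-product of cut sets
        sets = [frozenset()]
        for g in inputs:
            sets = [s | cs for s in sets for cs in cut_sets(g, gates)]
    # keep only minimal sets (drop any proper superset of another set)
    return [s for s in sets if not any(t < s for t in sets)]

tree = {"TOP": ("OR", ["G1", "C"]), "G1": ("AND", ["A", "B"])}
mcs = cut_sets("TOP", tree)
```

Common-cause analysis would then tag, in each cut set, the primary events sharing a special condition or secondary-cause susceptibility.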

  11. Minimal clinically important difference on the Motor Examination part of MDS-UPDRS.

    Science.gov (United States)

    Horváth, Krisztina; Aschermann, Zsuzsanna; Ács, Péter; Deli, Gabriella; Janszky, József; Komoly, Sámuel; Balázs, Éva; Takács, Katalin; Karádi, Kázmér; Kovács, Norbert

    2015-12-01

    Recent studies increasingly utilize the Movement Disorders Society Sponsored Unified Parkinson's Disease Rating Scale (MDS-UPDRS). However, the minimal clinically important difference (MCID) has not been fully established for MDS-UPDRS yet. To assess the MCID thresholds for MDS-UPDRS Motor Examination (Part III). 728 paired investigations of 260 patients were included. At each visit both MDS-UPDRS and Clinician-reported Global Impression-Improvement (CGI-I) scales were assessed. MDS-UPDRS Motor Examination (ME) score changes associated with CGI-I score 4 (no change) were compared with MDS-UPDRS ME score changes associated with CGI-I score 3 (minimal improvement) and CGI-I score 5 (minimal worsening). Both anchor- and distribution-based techniques were utilized to determine the magnitude of the MCID. The MCID estimates for MDS-UPDRS ME were asymmetric: -3.25 points for detecting minimal, but clinically pertinent, improvement and 4.63 points for observing minimal, but clinically pertinent, worsening. The MCID is the smallest change in scores that is clinically meaningful to patients. These MCID estimates may allow judging whether a numeric change in MDS-UPDRS ME is clinically important. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Homogeneous Field and WKB Approximation in Deformed Quantum Mechanics with Minimal Length

    Directory of Open Access Journals (Sweden)

    Jun Tao

    2015-01-01

    Full Text Available In the framework of the deformed quantum mechanics with a minimal length, we consider the motion of a nonrelativistic particle in a homogeneous external field. We find the integral representation for the physically acceptable wave function in the position representation. Using the method of steepest descent, we obtain the asymptotic expansions of the wave function at large positive and negative arguments. We then employ the leading asymptotic expressions to derive the WKB connection formula, which proceeds from the classically forbidden region to the classically allowed one through a turning point. By the WKB connection formula, we prove the Bohr-Sommerfeld quantization rule up to O(β²). We also show that if the slope of the potential at a turning point is too steep, the WKB connection formula is no longer valid around the turning point. The effects of the minimal length on the classical motions are investigated using the Hamilton-Jacobi method. We also use the Bohr-Sommerfeld quantization to study statistical physics in deformed spaces with the minimal length.

  13. Minimal Reducts with Grasp

    Directory of Open Access Journals (Sweden)

    Iris Iddaly Mendez Gurrola

    2011-03-01

    The proper detection of a patient's level of dementia is important for offering suitable treatment. The diagnosis is based on certain criteria reflected in clinical examinations, from which the limitations and the degree of each patient's condition emerge. In order to reduce the total set of limitations to be evaluated, we used rough set theory, which has been applied in areas of artificial intelligence such as decision analysis, expert systems, knowledge discovery, and classification with multiple attributes. In our case the theory is applied to find the minimal set of limitations, or reduct, that generates the same classification as considering all the limitations. To this end we developed a GRASP (Greedy Randomized Adaptive Search Procedure) algorithm.
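
    A GRASP for minimal reducts can be sketched as follows. This is a hypothetical, simplified implementation (the data layout, parameter names, and scoring are our own, not necessarily the paper's): a greedy randomized construction adds attributes until the projected table classifies consistently, and a local search then drops redundant attributes.

```python
import random

def conflicts(rows, attrs):
    """Rows are (attribute_values, decision) pairs. Count rows whose
    projection onto attrs collides with a different decision."""
    seen, bad = {}, 0
    for values, decision in rows:
        key = tuple(values[a] for a in attrs)
        if key in seen and seen[key] != decision:
            bad += 1
        seen.setdefault(key, decision)
    return bad

def grasp_reduct(rows, n_attrs, iters=30, rcl_size=2, seed=0):
    rng = random.Random(seed)
    best = list(range(n_attrs))
    for _ in range(iters):
        # Greedy randomized construction phase.
        attrs, remaining = [], list(range(n_attrs))
        while conflicts(rows, attrs) > 0 and remaining:
            # Restricted candidate list: a few random candidates, keep the best.
            rcl = rng.sample(remaining, min(rcl_size, len(remaining)))
            a = min(rcl, key=lambda x: conflicts(rows, attrs + [x]))
            attrs.append(a)
            remaining.remove(a)
        # Local search phase: drop attributes that turn out to be redundant.
        for a in list(attrs):
            trimmed = [x for x in attrs if x != a]
            if conflicts(rows, trimmed) == 0:
                attrs = trimmed
        if conflicts(rows, attrs) == 0 and len(attrs) < len(best):
            best = attrs
    return sorted(best)

# Toy table: the decision depends on attribute 0 alone, so {0} is a reduct.
rows = [((0, 1, 0), 'a'), ((1, 1, 0), 'b'), ((0, 0, 1), 'a'), ((1, 0, 1), 'b')]
reduct = grasp_reduct(rows, 3)
```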

  14. Particle production after inflation with non-minimal derivative coupling to gravity

    International Nuclear Information System (INIS)

    Ema, Yohei; Jinno, Ryusuke; Nakayama, Kazunori; Mukaida, Kyohei

    2015-01-01

    We study cosmological evolution after inflation in models with a non-minimal derivative coupling to gravity. The background dynamics are solved, and particle production associated with the rapidly oscillating Hubble parameter is studied in detail. In addition, the production of gravitons through the non-minimal derivative coupling with the inflaton is studied. We also find that the sound speed squared of the scalar perturbation oscillates between positive and negative values when the non-minimal derivative coupling dominates over the minimal kinetic term, which may lead to an instability of this model. We point out that the particle production rates are the same as those in Einstein gravity with the minimal kinetic term if we require that the sound speed squared be positive definite.

  15. Minimization of decision tree depth for multi-label decision tables

    KAUST Repository

    Azad, Mohammad

    2014-10-01

    In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row, using a decision tree as our tool. Aiming to minimize the depth of the decision tree, we devised various greedy algorithms as well as a dynamic programming algorithm. Comparing with the optimal results obtained from the dynamic programming algorithm, we found that some greedy algorithms produce results close to optimal for the minimization of decision tree depth.

  16. Minimization of decision tree depth for multi-label decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row, using a decision tree as our tool. Aiming to minimize the depth of the decision tree, we devised various greedy algorithms as well as a dynamic programming algorithm. Comparing with the optimal results obtained from the dynamic programming algorithm, we found that some greedy algorithms produce results close to optimal for the minimization of decision tree depth.
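
    One such greedy construction could look like the following hypothetical sketch. The splitting heuristic shown (minimize the size of the largest branch) is our own stand-in, not necessarily one of the paper's algorithms; the depth of the tree it builds is what the greedy methods try to keep small.

```python
def common_decision(rows):
    """A decision shared by every row, if any. Rows are
    (attribute_values, decision_set) pairs."""
    shared = set.intersection(*(d for _, d in rows))
    return min(shared) if shared else None

def greedy_depth(rows, attrs):
    """Depth of a decision tree built greedily. Assumes rows with identical
    attribute values share at least one common decision (otherwise no
    finite tree exists)."""
    if common_decision(rows) is not None:
        return 0  # a leaf: one decision fits every remaining row
    def largest_branch(a):
        groups = {}
        for values, d in rows:
            groups.setdefault(values[a], []).append((values, d))
        return max(len(g) for g in groups.values())
    a = min(attrs, key=largest_branch)  # heuristic attribute choice
    groups = {}
    for values, d in rows:
        groups.setdefault(values[a], []).append((values, d))
    rest = [x for x in attrs if x != a]
    return 1 + max(greedy_depth(g, rest) for g in groups.values())

# Toy multi-label table: each row carries a *set* of acceptable decisions.
rows = [((0, 0), {1}), ((0, 1), {1, 2}), ((1, 0), {2}), ((1, 1), {2})]
depth = greedy_depth(rows, [0, 1])
```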

  17. Probabilistic Properties of Rectilinear Steiner Minimal Trees

    Directory of Open Access Journals (Sweden)

    V. N. Salnikov

    2015-01-01

    This work concerns the properties of Steiner minimal trees in the Manhattan plane in the context of introducing a probability measure. The problem is important because exact algorithms for the Steiner problem are computationally expensive (NP-hard) and the solution (especially for a large number of points to be connected) has a diversity of practical applications. The work therefore considers the possibility of ranking the possible topologies of minimal trees with respect to the probability of their usage. To this end, the known facts about the structural properties of minimal trees for selected metrics have been analyzed for their usefulness for the problem in question. For a small number of boundary (fixed) vertices, the paper offers a way to introduce a probability measure as a corollary of a proved theorem about structural properties of minimal trees. This work continues previous, similar activity concerning the problem of searching for minimal fillings, and it opens the door to a more general (and more complicated) task. The stated method demonstrates the possibility of reaching the final result analytically, which gives a chance of its applicability to the case of a larger number of boundary vertices (probably with the use of computer engineering). The introduced definition of an essential Steiner point allowed a considerable restriction of the ambiguity of the initial problem's solution and, at the same time, a comparison of this approach with more classical works in the field. The paper also lists the main barriers of classical approaches that prevent their use for the task of introducing a probability measure. In prospect, the application areas of the described method are expected to widen both in terms of system enlargement (the number of boundary vertices) and in terms of other metric spaces (the Euclidean case is of especial interest). The main interest is to find the classes of topologies with significantly

  18. Estimating biological elementary flux modes that decompose a flux distribution by the minimal branching property

    DEFF Research Database (Denmark)

    Chan, Siu Hung Joshua; Solem, Christian; Jensen, Peter Ruhdal

    2014-01-01

    biologically feasible EFMs by considering their graphical properties. A previous study on the transcriptional regulation of metabolic genes found that distinct branches at a branch point metabolite usually belong to distinct metabolic pathways. This suggests an intuitive property of biologically feasible EFMs......, i.e. minimal branching. RESULTS: We developed the concept of minimal branching EFM and derived the minimal branching decomposition (MBD) to decompose flux distributions. Testing in the core Escherichia coli metabolic network indicated that MBD can distinguish branches at branch points and greatly...... knowledge, which facilitates interpretation. Comparison of the methods applied to a complex flux distribution in Lactococcus lactis similarly showed the advantages of MBD. The minimal branching EFM concept underlying MBD should be useful in other applications....

  19. PREP KITT, System Reliability by Fault Tree Analysis. PREP, Min Path Set and Min Cut Set for Fault Tree Analysis, Monte-Carlo Method. KITT, Component and System Reliability Information from Kinetic Fault Tree Theory

    International Nuclear Information System (INIS)

    Vesely, W.E.; Narum, R.E.

    1997-01-01

    1 - Description of problem or function: The PREP/KITT computer program package obtains system reliability information from a system fault tree. The PREP program finds the minimal cut sets and/or the minimal path sets of the system fault tree. (A minimal cut set is a smallest set of components such that if all the components are simultaneously failed the system is failed. A minimal path set is a smallest set of components such that if all of the components are simultaneously functioning the system is functioning.) The KITT programs determine reliability information for the components of each minimal cut or path set, for each minimal cut or path set, and for the system. Exact, time-dependent reliability information is determined for each component and for each minimal cut set or path set. For the system, reliability results are obtained by upper bound approximations or by a bracketing procedure in which various upper and lower bounds may be obtained as close to one another as desired. The KITT programs can handle independent components which are non-repairable or which have a constant repair time. Any assortment of non-repairable components and components having constant repair times can be considered. Any inhibit conditions having constant probabilities of occurrence can be handled. The failure intensity of each component is assumed to be constant with respect to time. The KITT2 program can also handle components which during different time intervals, called phases, may have different reliability properties. 2 - Method of solution: The PREP program obtains minimal cut sets by either direct deterministic testing or by an efficient Monte Carlo algorithm. The minimal path sets are obtained using the Monte Carlo algorithm. The reliability information is obtained by the KITT programs from numerical solution of the simple integral balance equations of kinetic tree theory. 
3 - Restrictions on the complexity of the problem: The PREP program will obtain the minimal cut and
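
    The top-down expansion that deterministic cut-set codes of this kind perform can be illustrated with a hypothetical sketch (the gate encoding and function name are invented; the actual programs work on their own fault tree input formats). OR gates multiply the number of cut sets, AND gates enlarge each set, and supersets are discarded at the end so that only minimal cut sets remain.

```python
def minimal_cut_sets(gates, top):
    """Top-down expansion of an acyclic fault tree into minimal cut sets.
    gates maps a gate name to ('AND'|'OR', [children]); any name not in
    gates is a basic event."""
    sets = [[top]]
    while any(e in gates for s in sets for e in s):
        new = []
        for s in sets:
            g = next((e for e in s if e in gates), None)
            if g is None:
                new.append(s)  # fully resolved to basic events
                continue
            kind, children = gates[g]
            rest = [e for e in s if e != g]
            if kind == 'AND':
                new.append(rest + children)       # AND: enlarge the set
            else:
                new.extend(rest + [c] for c in children)  # OR: one set per child
        sets = new
    # Drop duplicates and non-minimal (superset) cut sets.
    sets = [frozenset(s) for s in sets]
    return sorted({s for s in sets if not any(t < s for t in sets)},
                  key=sorted)

# Toy tree: T fails if both a and b fail, or if c fails.
gates = {'T': ('OR', ['G1', 'c']), 'G1': ('AND', ['a', 'b'])}
cuts = minimal_cut_sets(gates, 'T')
```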

  20. Iterated greedy algorithms to minimize the total family flow time for job-shop scheduling with job families and sequence-dependent set-ups

    Science.gov (United States)

    Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho

    2017-10-01

    This study addresses a variant of job-shop scheduling in which jobs are grouped into job families, but they are processed individually. The problem can be found in various industrial systems, especially in reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has the component-matching requirements, it can be regarded as a job shop with job families since the components of a product constitute a job family. In particular, sequence-dependent set-ups in which set-up time depends on the job just completed and the next job to be processed are also considered. The objective is to minimize the total family flow time, i.e. the maximum among the completion times of the jobs within a job family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
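
    The destruct-and-reconstruct core of an iterated greedy algorithm can be sketched as follows. This is a hypothetical single-machine stand-in with sequence-dependent set-ups and a total-flow-time objective, not the authors' job-shop model with job families: a few jobs are removed from the incumbent sequence, greedily re-inserted at their best positions, and the candidate is accepted if it improves the objective.

```python
import random

def total_flowtime(seq, proc, setup):
    """Sum of completion times on one machine with sequence-dependent set-ups."""
    t, total, prev = 0.0, 0.0, None
    for j in seq:
        t += (setup[prev][j] if prev is not None else 0) + proc[j]
        total += t
        prev = j
    return total

def iterated_greedy(jobs, proc, setup, d=2, iters=200, seed=0):
    rng = random.Random(seed)
    def rebuild(partial, removed):
        for j in removed:  # greedy best-position insertion
            best = min(range(len(partial) + 1),
                       key=lambda i: total_flowtime(
                           partial[:i] + [j] + partial[i:], proc, setup))
            partial.insert(best, j)
        return partial
    seq = rebuild([], list(jobs))  # greedy initial solution
    best_seq, best_val = seq, total_flowtime(seq, proc, setup)
    for _ in range(iters):
        removed = rng.sample(best_seq, min(d, len(best_seq)))  # destruction
        partial = [j for j in best_seq if j not in removed]
        cand = rebuild(partial, removed)                       # reconstruction
        val = total_flowtime(cand, proc, setup)
        if val < best_val:  # accept only improvements
            best_seq, best_val = cand, val
    return best_seq, best_val

# Toy instance: 3 jobs, constant unit set-ups; SPT order is optimal here.
proc = {0: 1, 1: 2, 2: 3}
setup = {i: {j: 1 for j in range(3)} for i in range(3)}
seq, val = iterated_greedy([0, 1, 2], proc, setup)
```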

  1. On relevant boundary perturbations of unitary minimal models

    International Nuclear Information System (INIS)

    Recknagel, A.; Roggenkamp, D.; Schomerus, V.

    2000-01-01

    We consider unitary Virasoro minimal models on the disk with Cardy boundary conditions and discuss deformations by certain relevant boundary operators, analogous to tachyon condensation in string theory. Concentrating on the least relevant boundary field, we can perform a perturbative analysis of renormalization group fixed points. We find that the systems always flow towards stable fixed points which admit no further (non-trivial) relevant perturbations. The new conformal boundary conditions are in general given by superpositions of 'pure' Cardy boundary conditions

  2. Genomic determinants of sporulation in Bacilli and Clostridia: towards the minimal set of sporulation-specific genes.

    Science.gov (United States)

    Galperin, Michael Y; Mekhedov, Sergei L; Puigbo, Pere; Smirnov, Sergey; Wolf, Yuri I; Rigden, Daniel J

    2012-11-01

    Three classes of low-G+C Gram-positive bacteria (Firmicutes), Bacilli, Clostridia and Negativicutes, include numerous members that are capable of producing heat-resistant endospores. Spore-forming firmicutes include many environmentally important organisms, such as insect pathogens and cellulose-degrading industrial strains, as well as human pathogens responsible for such diseases as anthrax, botulism, gas gangrene and tetanus. In the best-studied model organism Bacillus subtilis, sporulation involves over 500 genes, many of which are conserved among other bacilli and clostridia. This work aimed to define the genomic requirements for sporulation through an analysis of the presence of sporulation genes in various firmicutes, including those with smaller genomes than B. subtilis. Cultivable spore-formers were found to have genomes larger than 2300 kb and encompass over 2150 protein-coding genes of which 60 are orthologues of genes that are apparently essential for sporulation in B. subtilis. Clostridial spore-formers lack, among others, spoIIB, sda, spoVID and safA genes and have non-orthologous displacements of spoIIQ and spoIVFA, suggesting substantial differences between bacilli and clostridia in the engulfment and spore coat formation steps. Many B. subtilis sporulation genes, particularly those encoding small acid-soluble spore proteins and spore coat proteins, were found only in the family Bacillaceae, or even in a subset of Bacillus spp. Phylogenetic profiles of sporulation genes, compiled in this work, confirm the presence of a common sporulation gene core, but also illuminate the diversity of the sporulation processes within various lineages. These profiles should help further experimental studies of uncharacterized widespread sporulation genes, which would ultimately allow delineation of the minimal set(s) of sporulation-specific genes in Bacilli and Clostridia. Published 2012. This article is a U.S. 
Government work and is in the public domain in the USA.

  3. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  4. Machine scheduling to minimize weighted completion times the use of the α-point

    CERN Document Server

    Gusmeroli, Nicoló

    2018-01-01

    This work reviews the most important results regarding the use of the α-point in scheduling theory. It provides a number of different LP relaxations for scheduling problems and seeks to explain their polyhedral consequences. It also explains the concept of the α-point and how the conversion algorithm works, pointing out its relation to the sum of weighted completion times. Lastly, the book explores the latest techniques used for many scheduling problems with different constraints, such as release dates, precedences, and parallel machines. This reference book is intended for advanced undergraduate and postgraduate students who are interested in scheduling theory, and it will also inspire researchers wanting to learn about sophisticated techniques and open problems in the field.

  5. Towards the assembly of a minimal oscillator

    NARCIS (Netherlands)

    Nourian, Z.

    2015-01-01

    Life must have started with a lower degree of complexity and connectivity. This statement readily triggers the question: how simple is the simplest representation of life? In other words, taking a constructive approach, what are the requirements for creating a minimal cell? This thesis sets

  6. FTA, Fault Tree Analysis for Minimal Cut Sets, Graphics for CALCOMP

    International Nuclear Information System (INIS)

    Van Slyke, W.J.; Griffing, D.E.; Diven, J.

    1978-01-01

    1 - Description of problem or function: The FTA (Fault Tree Analysis) system was designed to predict probabilities of the modes of failure for complex systems and to graphically present the structure of systems. There are three programs in the system. Program ALLCUTS performs the calculations. Program KILMER constructs a CalComp plot file of the system fault tree. Program BRANCH builds a cross-reference list of the system fault tree. 2 - Method of solution: ALLCUTS employs a top-down set expansion algorithm to find fault tree cut-sets and then optionally calculates their probability using a currently accepted cut-set quantification method. The methodology is adapted from that in WASH-1400 (draft), August 1974. 3 - Restrictions on the complexity of the problem: Maxima of: 175 basic events, 425 rate events. ALLCUTS may be expanded to solve larger problems depending on available core memory

  7. Dual-time-point Imaging and Delayed-time-point Fluorodeoxyglucose-PET/Computed Tomography Imaging in Various Clinical Settings

    DEFF Research Database (Denmark)

    Houshmand, Sina; Salavati, Ali; Antonsen Segtnan, Eivind

    2016-01-01

    The techniques of dual-time-point imaging (DTPI) and delayed-time-point imaging, which are mostly used to distinguish between inflammatory and malignant diseases, have increased the specificity of fluorodeoxyglucose (FDG)-PET for the diagnosis and prognosis of certain diseases. A gradually incr...

  8. FTAP, Minimal Cut Sets of Arbitrary Fault Trees. FRTPLT, Fault Tree Structure and Logical Gates Plot for Program FTAP. FRTGEN, Fault Trees by Sub-tree Generator from Parent Tree for Program FTAP

    International Nuclear Information System (INIS)

    Willie, Randall R.; Rabien, U.

    1997-01-01

    1 - Description of problem or function: FTAP is a general-purpose program for deriving minimal reliability cut and path set families from the fault tree for a complex system. The program has a number of useful features that make it well-suited to nearly all fault tree applications. An input fault tree may specify the system state as any logical function of subsystem or component state variables or complements of these variables; thus, for instance, 'exclusive-or' type relations may be formed. When fault tree logical relations involve complements of state variables, the analyst may instruct FTAP to produce a family of prime implicants, a generalization of the minimal cut set concept. The program offers the flexibility of several distinct methods of generating cut set families. FTAP can also identify certain subsystems as system modules and provide a collection of minimal cut set families that essentially expresses the system state as a function of these module state variables. Another feature allows a useful subfamily to be obtained when the family of minimal cut sets or prime implicants is too large to be found in its entirety; this subfamily may consist of only those sets not containing more than some fixed number of elements or only those sets 'interesting' to the analyst in some special sense. Finally, the analyst can modify the input fault tree in various ways by declaring state variables identically true or false. 2 - Method of solution: Fault tree methods are based on the observation that the system state, either working or failed, can usually be expressed as a Boolean relation between states of several large, readily identifiable subsystems. The state of each subsystem in turn depends on states of simpler subsystems and components which compose it, so that the state of the system itself is determined by a hierarchy of logical relationships between states of subsystems. A fault tree is a graphical representation of these relationships. 
3 - Restrictions on the

  9. Acquiring minimally invasive surgical skills

    OpenAIRE

    Hiemstra, Ellen

    2012-01-01

    Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to establish this scientific basis, focusing on the training and evaluation of minimally invasive surgical skills both in a training setting and in practice in the operating room. This thesis has provided greater insight into the organization of surgical skills training during the residency training of surgical medical specialists.

  10. Prederivatives of gamma paraconvex set-valued maps and Pareto optimality conditions for set optimization problems.

    Science.gov (United States)

    Huang, Hui; Ning, Jixian

    2017-01-01

    Prederivatives play an important role in the research of set optimization problems. First, we establish several existence theorems of prederivatives for γ-paraconvex set-valued mappings in Banach spaces with [Formula: see text]. Then, in terms of prederivatives, we establish both necessary and sufficient conditions for the existence of Pareto minimal solutions of set optimization problems.

  11. Rule extraction from minimal neural networks for credit card screening.

    Science.gov (United States)

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important not only to find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural-network-based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
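
    The pruning idea can be illustrated with a hypothetical sketch for a one-hidden-unit network. The weight layout, threshold rule, and toy data here are invented, and the paper's procedure differs in detail; the point is that connections are zeroed greedily, smallest magnitude first, as long as training-set accuracy is preserved.

```python
import math

def predict(w, b, v, c, x):
    """One-hidden-unit network: inputs -> tanh hidden unit -> thresholded output."""
    h = math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return 1 if v * h + c > 0 else 0

def accuracy(w, b, v, c, data):
    return sum(predict(w, b, v, c, x) == y for x, y in data) / len(data)

def prune(w, b, v, c, data, tol=0.0):
    """Zero the smallest-magnitude input weight while accuracy drops by at
    most tol -- a simple stand-in for the paper's pruning procedure."""
    w = list(w)
    base = accuracy(w, b, v, c, data)
    while True:
        alive = [i for i, wi in enumerate(w) if wi != 0]
        if not alive:
            return w
        i = min(alive, key=lambda i: abs(w[i]))
        trial = list(w)
        trial[i] = 0
        if accuracy(trial, b, v, c, data) >= base - tol:
            w = trial  # connection removed; keep pruning
        else:
            return w   # removal hurts accuracy; stop

# Toy data: the label depends only on the first input, so the second
# connection (weight 0.01) should be pruned away.
data = [([1, 0], 1), ([-1, 0], 0), ([1, 1], 1), ([-1, 1], 0)]
pruned = prune([2.0, 0.01], 0.0, 1.0, 0.0, data)
```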

  12. Fixed-Point Configurable Hardware Components

    Directory of Open Access Journals (Sweden)

    Rocher Romuald

    2006-01-01

    To reduce the gap between VLSI technology capability and designer productivity, design reuse based on IP (intellectual property) is commonly used. In terms of arithmetic accuracy, the generated architecture can generally only be configured through the input and output word lengths. In this paper, a new method to optimize fixed-point arithmetic IP is proposed. The architecture cost is minimized under accuracy constraints defined by the user. Our approach allows exploring the fixed-point search space and the algorithm-level search space to select the optimized structure and fixed-point specification. To significantly reduce the optimization and design times, analytical models are used for the fixed-point optimization process.

  13. Optimal Point-to-Point Trajectory Tracking of Redundant Manipulators using Generalized Pattern Search

    Directory of Open Access Journals (Sweden)

    Thi Rein Myo

    2008-11-01

    Optimal point-to-point trajectory planning for a planar redundant manipulator is considered in this study. The main objective is to minimize the sum of the position errors of the end-effector at each intermediate point along the trajectory, so that the end-effector can track the prescribed trajectory accurately. An algorithm combining a Genetic Algorithm and Pattern Search into a Generalized Pattern Search (GPS) is introduced to design the optimal trajectory. To verify the proposed algorithm, simulations for a 3-DOF planar manipulator with different end-effector trajectories have been carried out. A comparison between the Genetic Algorithm and the Generalized Pattern Search shows that the GPS gives excellent tracking performance.

  14. Automatic Optimization of Focal Point Position in CO2 Laser Welding with Neural Network in A Focus Control System

    DEFF Research Database (Denmark)

    Gong, Hui; Olsen, Flemming Ove

    CO2 lasers are increasingly being utilized for quality welding in production. Considering the high cost of equipment, the start-up time and the set-up time should be minimized. Ideally the parameters should be set up and optimized more or less automatically. In this paper a control system...... is designed and built to automatically optimize the focal point position, one of the most important parameters in CO2 laser welding, in order to perform a desired deep/full penetration welding. The control system mainly consists of a multi-axis motion controller - PMAC, a light sensor - Photo Diode, a data...

  15. Canonical Primal-Dual Method for Solving Non-convex Minimization Problems

    OpenAIRE

    Wu, Changzhi; Li, Chaojie; Gao, David Yang

    2012-01-01

    A new primal-dual algorithm is presented for solving a class of non-convex minimization problems. This algorithm is based on canonical duality theory, such that the original non-convex minimization problem is first reformulated as a convex-concave saddle point optimization problem, which is then solved by a quadratically perturbed primal-dual method. Numerical examples are illustrated. Comparing...

  16. Modular invariance of N=2 minimal models

    International Nuclear Information System (INIS)

    Sidenius, J.

    1991-01-01

    We prove modular covariance of one-point functions at one loop in the diagonal N=2 minimal superconformal models. We use the recently derived general formalism for computing arbitrary conformal blocks in these models. Our result should be sufficient to guarantee modular covariance at arbitrary genus. It is thus an important check on the general formalism which is not manifestly modular covariant. (orig.)

  17. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    NARCIS (Netherlands)

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is

  18. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    Science.gov (United States)

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive pairwise comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
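
    For contrast with the naive O(n²) comparison, the classical partition-refinement baseline (Moore's algorithm, not the paper's backward-depth method) can be sketched as follows; the coarse-then-refine structure is the same idea the paper accelerates.

```python
def moore_minimize(states, alphabet, delta, accepting):
    """Classical partition-refinement DFA minimization. delta maps
    (state, symbol) -> state. Returns the equivalence classes as a
    sorted list of sorted state lists."""
    # Coarse partition: accepting vs non-accepting states.
    block_of = {s: (s in accepting) for s in states}
    while True:
        # Two states stay together iff they are in the same block and their
        # successors on every symbol lie in the same blocks.
        signature = {s: (block_of[s],
                         tuple(block_of[delta[s, a]] for a in alphabet))
                     for s in states}
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_block_of = {s: ids[signature[s]] for s in states}
        if len(set(new_block_of.values())) == len(set(block_of.values())):
            groups = {}
            for s in states:
                groups.setdefault(new_block_of[s], []).append(s)
            return sorted(sorted(g) for g in groups.values())
        block_of = new_block_of  # refine and repeat

# Toy DFA over {a, b}: states 1 and 2 are equivalent (both go to the
# accepting sink 3 on every symbol).
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 3, (1, 'b'): 3,
         (2, 'a'): 3, (2, 'b'): 3, (3, 'a'): 3, (3, 'b'): 3}
classes = moore_minimize([0, 1, 2, 3], ['a', 'b'], delta, {3})
```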

  19. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between the approaches. Some authors have tried to arrive at a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
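
    The distribution-based quantities mentioned above follow standard formulas: SEM = SD·√(1 − reliability), and the minimally detectable change at 95% confidence is MDC95 = 1.96·√2·SEM. A minimal sketch (variable names are ours):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement from the baseline standard deviation
    and a test-retest reliability coefficient (e.g. an ICC)."""
    return sd * math.sqrt(1 - reliability)

def mdc95(sd, reliability):
    """Minimally detectable change at 95% confidence: the smallest change
    exceeding measurement error -- not necessarily an *important* change."""
    return 1.96 * math.sqrt(2) * sem(sd, reliability)
```

    For example, with SD = 10 and reliability 0.91, SEM = 3.0 points; whether the resulting MDC95 is small enough to detect the (anchor-based) MIC is exactly the judgement the abstract calls for.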

  20. Using the critical incident technique to define a minimal data set for requirements elicitation in public health.

    Science.gov (United States)

    Olvingson, Christina; Hallberg, Niklas; Timpka, Toomas; Greenes, Robert A

    2002-12-18

    The introduction of computer-based information systems (ISs) in public health provides enhanced possibilities for service improvements and hence also for improvement of the population's health. Not least, new communication systems can help in the socialization and integration process needed between the different professions and geographical regions. Therefore, the development of ISs that truly support public health practices requires that technical, cognitive, and social issues be taken into consideration. A notable problem is capturing the 'voices' of all potential users, i.e., the viewpoints of different public health practitioners; failing to capture these voices will result in inefficient or even useless systems. The aim of this study is to develop a minimal data set for capturing users' voices on problems experienced by public health professionals in their daily work and opinions about how these problems can be solved. The issues of concern thus captured can be used both as the basis for formulating the requirements of ISs for public health professionals and to create an understanding of the use context. Further, the data can help in directing the design to the features most important for the users.

  1. Minimal string theories and integrable hierarchies

    Science.gov (United States)

    Iyer, Ramakrishnan

    Well-defined, non-perturbative formulations of the physics of string theories in specific minimal or superminimal model backgrounds can be obtained by solving matrix models in the double scaling limit. They provide us with the first examples of completely solvable string theories. Despite being relatively simple compared to higher dimensional critical string theories, they furnish non-perturbative descriptions of interesting physical phenomena such as geometrical transitions between D-branes and fluxes, tachyon condensation and holography. The physics of these theories in the minimal model backgrounds is succinctly encoded in a non-linear differential equation known as the string equation, along with an associated hierarchy of integrable partial differential equations (PDEs). The bosonic string in (2,2m-1) conformal minimal model backgrounds and the type 0A string in (2,4 m) superconformal minimal model backgrounds have the Korteweg-de Vries system, while type 0B in (2,4m) backgrounds has the Zakharov-Shabat system. The integrable PDE hierarchy governs flows between backgrounds with different m. In this thesis, we explore this interesting connection between minimal string theories and integrable hierarchies further. We uncover the remarkable role that an infinite hierarchy of non-linear differential equations plays in organizing and connecting certain minimal string theories non-perturbatively. We are able to embed the type 0A and 0B (A,A) minimal string theories into this single framework. The string theories arise as special limits of a rich system of equations underpinned by an integrable system known as the dispersive water wave hierarchy. We find that there are several other string-like limits of the system, and conjecture that some of them are type IIA and IIB (A,D) minimal string backgrounds. We explain how these and several other string-like special points arise and are connected. In some cases, the framework endows the theories with a non

  2. Environmental Restoration Program Waste Minimization and Pollution Prevention Awareness Program Plan

    Energy Technology Data Exchange (ETDEWEB)

    Grumski, J. T.; Swindle, D. W.; Bates, L. D.; DeLozier, M. F.P.; Frye, C. E.; Mitchell, M. E.

    1991-09-30

    In response to DOE Order 5400.1, this plan outlines the requirements for a Waste Minimization and Pollution Prevention Awareness Program for the Environmental Restoration (ER) Program at Martin Marietta Energy Systems, Inc. Statements of the national, Department of Energy, Energy Systems, and Energy Systems ER Program policies on waste minimization are included and reflect the attitudes of these organizations and their commitment to the waste minimization effort. Organizational responsibilities for the waste minimization effort are clearly defined and discussed, and the program objectives and goals are set forth. Waste assessment is addressed as a key element in developing the waste generation baseline. There are discussions on the scope of ER-specific waste minimization techniques and approaches to employee awareness and training. There is also a discussion on the process for continual evaluation of the Waste Minimization Program. Appendixes present an implementation schedule for the Waste Minimization and Pollution Prevention Program, the program budget, an organization chart, and the ER waste minimization policy.

  3. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    Science.gov (United States)

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.
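The rectangular-pulse intuition behind such energy/charge trade-offs can be checked numerically. A minimal sketch, assuming a purely resistive load rather than the paper's active nerve-fiber model (all parameter values below are hypothetical): for a fixed delivered charge and pulse duration, the Cauchy-Schwarz inequality implies that the constant-current waveform is energy-minimal.

```python
import numpy as np

# Toy illustration (not the paper's optimal-control framework): for a
# purely resistive load, energy E = R * sum(i^2) * dt and charge
# Q = sum(i) * dt. With Q and the duration fixed, Cauchy-Schwarz gives
# that the rectangular (constant-current) waveform minimizes energy.

rng = np.random.default_rng(0)
n, dt, R, Q = 100, 1e-5, 1.0, 1e-6  # samples, step (s), load (ohm), charge (C)

def energy(i):
    return R * np.sum(i**2) * dt

const = np.full(n, Q / (n * dt))       # rectangular pulse delivering charge Q
for _ in range(1000):                  # random waveforms with the same charge
    w = rng.random(n)
    w *= Q / (np.sum(w) * dt)          # rescale so the charge is exactly Q
    assert energy(w) >= energy(const)  # never beats the rectangular pulse
print("rectangular pulse energy (J):", energy(const))
```

The closed-form minimum here is R*Q^2/(n*dt); the paper's charge-balanced and nerve-model-constrained waveforms are more involved.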

  4. Environmental Restoration Program Waste Minimization and Pollution Prevention Awareness Program Plan

    International Nuclear Information System (INIS)

    1991-01-01

    In response to DOE Order 5400.1, this plan outlines the requirements for a Waste Minimization and Pollution Prevention Awareness Program for the Environmental Restoration (ER) Program at Martin Marietta Energy Systems, Inc. Statements of the national, Department of Energy, Energy Systems, and Energy Systems ER Program policies on waste minimization are included and reflect the attitudes of these organizations and their commitment to the waste minimization effort. Organizational responsibilities for the waste minimization effort are clearly defined and discussed, and the program objectives and goals are set forth. Waste assessment is addressed as a key element in developing the waste generation baseline. There are discussions on the scope of ER-specific waste minimization techniques and approaches to employee awareness and training. There is also a discussion on the process for continual evaluation of the Waste Minimization Program. Appendixes present an implementation schedule for the Waste Minimization and Pollution Prevention Program, the program budget, an organization chart, and the ER waste minimization policy.

  5. A game on the universe of sets

    International Nuclear Information System (INIS)

    Saveliev, D I

    2008-01-01

    Working in set theory without the axiom of regularity, we consider a two-person game on the universe of sets. In this game, the players choose in turn an element of a given set, an element of this element and so on. A player wins if he leaves his opponent no possibility of making a move, that is, if he has chosen the empty set. Winning sets (those admitting a winning strategy for one of the players) form a natural hierarchy with levels indexed by ordinals (in the finite case, the ordinal indicates the shortest length of a winning strategy). We show that the class of hereditarily winning sets is an inner model containing all well-founded sets and that each of the four possible relations between the universe, the class of hereditarily winning sets, and the class of well-founded sets is consistent. As far as the class of winning sets is concerned, either it is equal to the whole universe, or many of the axioms of set theory cannot hold on this class. Somewhat surprisingly, this does not apply to the axiom of regularity: we show that the failure of this axiom is consistent with its relativization to winning sets. We then establish more subtle properties of winning non-well-founded sets. We describe all classes of ordinals for which the following is consistent: winning sets without minimal elements (in the sense of membership) occur exactly at the levels indexed by the ordinals of this class. In particular, we show that if an even level of the hierarchy of winning sets contains a set without minimal elements, then all higher levels contain such sets. We show that the failure of the axiom of regularity implies that all odd levels contain sets without minimal elements, but it is consistent with the absence of such sets at all even levels as well as with their appearance at an arbitrary even non-limit or countable-cofinal level. To obtain consistency results, we propose a new method for obtaining models with non-well-founded sets. 
Finally, we study how long this game can

  6. Optimization of the size and shape of the set-in nozzle for a PWR reactor pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Murtaza, Usman Tariq, E-mail: maniiut@yahoo.com; Javed Hyder, M., E-mail: hyder@pieas.edu.pk

    2015-04-01

    Highlights: • The size and shape of the set-in nozzle of the RPV have been optimized. • The optimized nozzle ensures a mass reduction of around 198 kg per nozzle. • The mass of the RPV should be minimized for better fracture toughness. - Abstract: The objective of this research work is to optimize the size and shape of the set-in nozzle for a typical reactor pressure vessel (RPV) of a 300 MW pressurized water reactor. The analysis was performed by optimizing the four design variables which control the size and shape of the nozzle. These variables are the inner radius of the nozzle, the thickness of the nozzle, the taper angle at the nozzle-cylinder intersection, and the point where the taper of the nozzle starts. It is concluded that the optimum design of the nozzle is the one that minimizes the two conflicting state variables, i.e., the stress intensity (Tresca yield criterion) and the mass of the RPV.

  7. Motivations for seeking minimally invasive cosmetic procedures in an academic outpatient setting.

    Science.gov (United States)

    Sobanko, Joseph F; Taglienti, Anthony J; Wilson, Anthony J; Sarwer, David B; Margolis, David J; Dai, Julia; Percec, Ivona

    2015-11-01

    The demand for minimally invasive cosmetic procedures has continued to rise, yet few studies have examined this patient population. This study sought to define the demographics, social characteristics, and motivations of patients seeking minimally invasive facial cosmetic procedures. A prospective, single-institution cohort study of 72 patients was conducted from 2011 through 2014 at an urban academic medical center. Patients were aged 25 through 70 years; presented for botulinum toxin or soft tissue filler injections; and completed demographic, informational, and psychometric questionnaires before treatment. Descriptive statistics were conducted using Stata statistical software. The average patient was 47.8 years old, was married, had children, was employed, possessed a college or advanced degree, and reported an above-average income. Most patients felt that the first signs of aging occurred around their eyes (74.6%), and a similar percentage indicated that this area was the site most desired for rejuvenation. Almost one-third of patients experienced a "major life event" within the preceding year, nearly half had sought prior counseling from a mental health specialist, and 23.6% were being actively prescribed psychiatric medication at the time of treatment. Patients undergoing injectable aesthetic treatments in an urban outpatient academic center were mostly employed, highly educated, affluent women who believed that their procedure would positively impact their appearance. A significant minority experienced a major life event within the past year, which an astute clinician should address during the initial patient consultation. This study helps to better understand the psychosocial factors characterizing this patient population. Level of Evidence: 4 (Therapeutic).

  8. Matrix factorizations, minimal models and Massey products

    International Nuclear Information System (INIS)

    Knapp, Johanna; Omer, Harun

    2006-01-01

    We present a method to compute the full non-linear deformations of matrix factorizations for ADE minimal models. This method is based on the calculation of higher products in the cohomology, called Massey products. The algorithm yields a polynomial ring whose vanishing relations encode the obstructions of the deformations of the D-branes characterized by these matrix factorizations. This coincides with the critical locus of the effective superpotential which can be computed by integrating these relations. Our results for the effective superpotential are in agreement with those obtained from solving the A-infinity relations. We point out a relation to the superpotentials of Kazama-Suzuki models. We will illustrate our findings by various examples, putting emphasis on the E_6 minimal model.

  9. Multidimensional scaling for large genomic data sets

    Directory of Open Access Journals (Sweden)

    Lu Henry

    2008-04-01

    reconstructs the low dimensional representation as does the classical MDS. Its performance depends on the grouping method and the minimal number of the intersection points between groups. Feasible grouping methods are suggested; each group must contain both neighboring and far apart data points. Our method can represent a large high-dimensional data set in a low dimensional space not only efficiently but also effectively.
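The classical MDS that the grouped method above approximates can be sketched in a few lines: double-center the squared distance matrix and take the top eigenpairs. The data below are hypothetical random points, not the genomic sets from the paper.

```python
import numpy as np

# Classical MDS: recover a low-dimensional configuration from pairwise
# distances alone (hypothetical data; the paper applies a grouped
# variant to data sets too large for this dense computation).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))                          # 50 points in 10-D
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                    # centering matrix
B = -0.5 * J @ (D**2) @ J                              # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)                         # ascending eigenvalues
idx = np.argsort(vals)[::-1][:2]                       # keep the top 2
Y = vecs[:, idx] * np.sqrt(vals[idx])                  # 2-D embedding
print(Y.shape)  # (50, 2)
```

The grouped scheme in the abstract runs this on overlapping subsets and stitches the embeddings together via their intersection points, which is why the number of shared points between groups matters.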

  10. Hardware-accelerated Point Generation and Rendering of Point-based Impostors

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2005-01-01

    This paper presents a novel scheme for generating points from triangle models. The method is fast and lends itself well to implementation using graphics hardware. The triangle to point conversion is done by rendering the models, and the rendering may be performed procedurally or by a black box API. I describe the technique in detail and discuss how the generated point sets can easily be used as impostors for the original triangle models used to create the points. Since the points reside solely in GPU memory, these impostors are fairly efficient. Source code is available online.

  11. A GLOBAL SOLUTION TO TOPOLOGICAL RECONSTRUCTION OF BUILDING ROOF MODELS FROM AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    J. Yan

    2016-06-01

    This paper presents a global solution to building roof topological reconstruction from LiDAR point clouds. Starting with segmented roof planes from building LiDAR points, a BSP (binary space partitioning) algorithm is used to partition the bounding box of the building into volumetric cells, whose geometric features and their topology are simultaneously determined. To resolve the inside/outside labelling problem of cells, a global energy function considering surface visibility and spatial regularization between adjacent cells is constructed and minimized via graph cuts. As a result, the cells are labelled as either inside or outside, where the planar surfaces between the inside and outside form the reconstructed building model. Two LiDAR data sets of Yangjiang (China) and Wuhan University (China) are used in the study. Experimental results show that the completeness of reconstructed roof planes is 87.5%. Compared with existing data-driven approaches, the proposed approach is global. Roof faces and edges as well as their topology can be determined at one time via minimization of an energy function. Besides, this approach is robust to partial absence of roof planes and tends to reconstruct roof models with visibility-consistent surfaces.
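The labelling step can be illustrated on a toy energy of the same shape: a per-cell data cost plus a smoothness penalty between adjacent cells with different labels. The cells, costs, adjacency, and weight below are hypothetical, and brute force stands in for the graph-cut (max-flow) solver the paper uses.

```python
from itertools import product

# Toy stand-in for the graph-cut step: minimize a labelling energy
#   E(L) = sum_c data(c, L_c) + lam * sum_{(c,d) adjacent} [L_c != L_d]
# over inside(1)/outside(0) labels of volumetric cells. Graph cuts solve
# such submodular binary energies exactly; brute force over 2^n
# labellings illustrates the objective on 5 hypothetical cells.

data = [(0.1, 0.9), (0.2, 0.8), (0.7, 0.3), (0.8, 0.2), (0.6, 0.4)]
#        cost of labelling each cell as (outside, inside)
adj = [(0, 1), (1, 2), (2, 3), (3, 4)]   # adjacent cell pairs
lam = 0.25                               # smoothness (regularization) weight

def E(L):
    return (sum(data[c][L[c]] for c in range(len(L)))
            + lam * sum(L[a] != L[b] for a, b in adj))

best = min(product((0, 1), repeat=len(data)), key=E)
print(best, round(E(best), 3))  # (0, 0, 1, 1, 1) 1.45
```

The smoothness term is what makes the labelling, and hence the recovered roof surface, spatially regular rather than decided cell by cell.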

  12. Salty solutions: their effects on thermal set points in behavioral repertoires of albino rats.

    Science.gov (United States)

    Vitulli, W F; Aker, R; Howard, S W; Jones, W M; Kimball, M W; Quinn, J M

    1994-08-01

    Salt (sodium chloride) has been linked to increased blood pressure and a rise in core body temperature. The objective of this study was to investigate the role played by salt in altering behavioral thermoregulation in albino rats. Different doses of sodium chloride were administered (ip) prior to fixed-interval 2-min. schedules of microwave reinforcement in rats tested in a cold Skinner Box. Three Sprague-Dawley rats were conditioned to regulate their thermal environment with 5-sec. exposures of MW reinforcement in a repeated-measures reversal design. Friedman's non-parametric test showed significant differences among sodium chloride doses and physiologically normal saline. Post hoc sign tests showed that all doses of NaCl suppressed operant behavior for heat except 60 mg/kg. The hypothesis that sodium chloride lowers hypothalamic set point for heat was partially supported.
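A hedged sketch of the repeated-measures analysis described above, using SciPy's Friedman test. The response counts below are hypothetical (the paper's data are not reproduced here); each list holds one measurement per rat under a given dose condition.

```python
from scipy.stats import friedmanchisquare

# Hypothetical operant response counts for 3 rats under three dose
# conditions (repeated measures on the same subjects), mirroring the
# within-subject design above.
saline  = [112, 98, 105]
dose_30 = [80, 71, 77]    # 30 mg/kg NaCl (hypothetical)
dose_60 = [108, 95, 101]  # 60 mg/kg NaCl (hypothetical)

stat, p = friedmanchisquare(saline, dose_30, dose_60)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```

With these counts every rat ranks the conditions identically, giving the maximal chi-square of 6.0 for k = 3 conditions and n = 3 subjects; a significant result would then be followed by pairwise sign tests, as in the study.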

  13. Pseudo-set framing.

    Science.gov (United States)

    Barasz, Kate; John, Leslie K; Keenan, Elizabeth A; Norton, Michael I

    2017-10-01

    Pseudo-set framing-arbitrarily grouping items or tasks together as part of an apparent "set"-motivates people to reach perceived completion points. Pseudo-set framing changes gambling choices (Study 1), effort (Studies 2 and 3), giving behavior (Field Data and Study 4), and purchase decisions (Study 5). These effects persist in the absence of any reward, when a cost must be incurred, and after participants are explicitly informed of the arbitrariness of the set. Drawing on Gestalt psychology, we develop a conceptual account that predicts what will-and will not-act as a pseudo-set, and defines the psychological process through which these pseudo-sets affect behavior: over and above typical reference points, pseudo-set framing alters perceptions of (in)completeness, making intermediate progress seem less complete. In turn, these feelings of incompleteness motivate people to persist until the pseudo-set has been fulfilled.

  14. Hinkley Point 'C' power station public inquiry: proof of evidence on design and safety

    International Nuclear Information System (INIS)

    George, B.V.

    1988-09-01

    A public inquiry has been set up to examine the planning application made by the Central Electricity Generating Board (CEGB) for the construction of a 1200 MW Pressurized Water Reactor power station at Hinkley Point (Hinkley Point 'C') in the United Kingdom. The policy is to replicate the Sizewell 'B' PWR design. The Hinkley Point 'C' design is described indicating where changes in the Sizewell 'B' design have been made to accommodate site differences. These are associated with the civil engineering construction and some of the electrical systems and do not affect the safety case. External hazards differ from site to site and the effect on the safety case of those specific to Hinkley Point is examined. The Chernobyl accident and the assessment of the United Kingdom PWR which was carried out subsequently are reviewed. The assessment indicated that no changes in the Sizewell 'B' design and safety case were called for as a result of this accident; accident management developments are also reviewed, however. The CEGB's approach to minimizing occupational radiation doses is described. (UK)

  15. Irreducible descriptive sets of attributes for information systems

    KAUST Repository

    Moshkov, Mikhail; Skowron, Andrzej; Suraj, Zbigniew

    2010-01-01

    An irreducible descriptive set for the considered information system S is a minimal (relative to inclusion) set B of attributes which defines exactly the set Ext(S) by means of true and realizable rules constructed over attributes from the considered set B.
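Minimality relative to inclusion can be sketched on a toy decision-system analogue. The table below is hypothetical, and "distinguishes every row pair that the full attribute set distinguishes" stands in for the paper's richer rule-based condition on Ext(S):

```python
from itertools import combinations

# Sketch: find the smallest attribute subsets B that still discern every
# pair of rows the full attribute set discerns (a simplified stand-in
# for irreducible descriptive sets; hypothetical 4-row, 3-attribute table).
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def discerns(B, r, s):
    return any(r[a] != s[a] for a in B)

full = range(len(rows[0]))
pairs = [(r, s) for i, r in enumerate(rows) for s in rows[i + 1:]
         if discerns(full, r, s)]

for k in range(1, len(rows[0]) + 1):       # smallest cardinality first
    found = [B for B in combinations(full, k)
             if all(discerns(B, r, s) for r, s in pairs)]
    if found:
        print("minimal attribute sets:", found)
        break
```

Here attribute 2 is the XOR of the first two, so every 2-element subset is already sufficient and minimal, while no single attribute is.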

  16. Inflation in non-minimal matter-curvature coupling theories

    Energy Technology Data Exchange (ETDEWEB)

    Gomes, C.; Bertolami, O. [Departamento de Física e Astronomia and Centro de Física do Porto, Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre s/n, 4169-007 Porto (Portugal); Rosa, J.G., E-mail: claudio.gomes@fc.up.pt, E-mail: joao.rosa@ua.pt, E-mail: orfeu.bertolami@fc.up.pt [Departamento de Física da Universidade de Aveiro and CIDMA, Campus de Santiago, 3810-183 Aveiro (Portugal)

    2017-06-01

    We study inflationary scenarios driven by a scalar field in the presence of a non-minimal coupling between matter and curvature. We show that the Friedmann equation can be significantly modified when the energy density during inflation exceeds a critical value determined by the non-minimal coupling, which in turn may considerably modify the spectrum of primordial perturbations and the inflationary dynamics. In particular, we show that these models are characterised by a consistency relation between the tensor-to-scalar ratio and the tensor spectral index that can differ significantly from the predictions of general relativity. We also give examples of observational predictions for some of the most commonly considered potentials and use the results of the Planck collaboration to set limits on the scale of the non-minimal coupling.

  17. Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions

    KAUST Repository

    Azad, Mohammad

    2014-09-13

    The paper is devoted to the analysis of greedy algorithms for the minimization of average depth of decision trees for decision tables such that each row is labeled with a set of decisions. The goal is to find one decision from the set of decisions. When compared with the optimal result obtained from a dynamic programming algorithm, we found that some greedy algorithms produce results which are close to the optimal result for the minimization of average depth of decision trees.
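The setting can be sketched concretely: rows carry *sets* of admissible decisions, a leaf is allowed as soon as one decision is common to all remaining rows, and a greedy rule chooses the split attribute. The table and the particular greedy criterion below are hypothetical stand-ins for the heuristics the paper compares.

```python
# Sketch (not the paper's exact heuristics): build a decision tree for a
# table whose rows carry sets of decisions; a leaf is allowed as soon as
# the remaining rows share a common decision. The greedy split picks the
# attribute whose values partition the rows into the most branches.

rows = [  # (attribute values, set of admissible decisions) - hypothetical
    ((0, 0), {1}), ((0, 1), {1, 2}), ((1, 0), {2}), ((1, 1), {2, 3}),
]

def total_depth(rows, depth=0):
    common = set.intersection(*(d for _, d in rows))
    if common:                       # one decision fits every row: leaf
        return depth * len(rows)
    a = max(range(len(rows[0][0])),  # greedy choice of split attribute
            key=lambda i: len({r[0][i] for r in rows}))
    return sum(total_depth([r for r in rows if r[0][a] == v], depth + 1)
               for v in {r[0][a] for r in rows})

print("average depth:", total_depth(rows) / len(rows))  # 1.0
```

A dynamic programming algorithm would instead enumerate subtables to find the exact minimum average depth; the paper measures how close greedy results like this come to that optimum.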

  18. Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    The paper is devoted to the analysis of greedy algorithms for the minimization of average depth of decision trees for decision tables such that each row is labeled with a set of decisions. The goal is to find one decision from the set of decisions. When compared with the optimal result obtained from a dynamic programming algorithm, we found that some greedy algorithms produce results which are close to the optimal result for the minimization of average depth of decision trees.

  19. Minimizing convex functions by continuous descent methods

    Directory of Open Access Journals (Sweden)

    Sergiu Aizicovici

    2010-01-01

    We study continuous descent methods for minimizing convex functions, defined on general Banach spaces, which are associated with an appropriate complete metric space of vector fields. We show that there exists an everywhere dense open set in this space of vector fields such that each of its elements generates strongly convergent trajectories.
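The prototypical vector field in this setting is the negative gradient, whose trajectories solve x'(t) = -grad f(x(t)). A minimal finite-dimensional sketch, discretizing that flow with explicit Euler steps for a hypothetical convex function:

```python
import numpy as np

# Continuous descent sketch: Euler discretization of the gradient-flow
# trajectory x'(t) = -grad f(x(t)) for the convex function
#   f(x) = ||x||^2 / 2 + log(1 + exp(x_0))
# (a hypothetical example; the paper works with general vector fields
# on Banach spaces, of which -grad f is one instance).

def grad(x):
    g = x.copy()
    g[0] += 1.0 / (1.0 + np.exp(-x[0]))  # d/dx_0 of log(1 + exp(x_0))
    return g

x = np.array([3.0, -2.0])                # arbitrary starting point
for _ in range(2000):                    # Euler steps of size 0.01
    x -= 0.01 * grad(x)
print(x, np.linalg.norm(grad(x)))        # near the unique minimizer
```

Because f is strongly convex, the discretized trajectory converges to the unique minimizer; the theorem quoted above says such strong convergence is generic over a dense open set of descent vector fields.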

  20. Deforestation of Peano continua and minimal deformation retracts

    Science.gov (United States)

    Conner, G.; Meilstrup, M.

    2012-01-01

    Every Peano continuum has a strong deformation retract to a deforested continuum, that is, one with no strongly contractible subsets attached at a single point. In a deforested continuum, each point with a one-dimensional neighborhood is either fixed by every self-homotopy of the space, or has a neighborhood which is a locally finite graph. A minimal deformation retract of a continuum (if it exists) is called its core. Every one-dimensional Peano continuum has a unique core, which can be obtained by deforestation. We give examples of planar Peano continua that contain no core but are deforested. PMID:23471120

  1. Regional control of Drosophila gut stem cell proliferation: EGF establishes GSSC proliferative set point & controls emergence from quiescence.

    Science.gov (United States)

    Strand, Marie; Micchelli, Craig A

    2013-01-01

    Adult stem cells vary widely in their rates of proliferation. Some stem cells are constitutively active, while others divide only in response to injury. The mechanism controlling this differential proliferative set point is not well understood. The anterior-posterior (A/P) axis of the adult Drosophila midgut has a segmental organization, displaying physiological compartmentalization and region-specific epithelia. These distinct midgut regions are maintained by defined stem cell populations with unique division schedules, providing an excellent experimental model with which to investigate this question. Here, we focus on the quiescent gastric stem cells (GSSCs) of the acidic copper cell region (CCR), which exhibit the greatest period of latency between divisions of all characterized gut stem cells, to define the molecular basis of differential stem cell activity. Our molecular genetic analysis demonstrates that the mitogenic EGF signaling pathway is a limiting factor controlling GSSC proliferation. We find that under baseline conditions, when GSSCs are largely quiescent, the lowest levels of EGF ligands in the midgut are found in the CCR. However, acute epithelial injury by enteric pathogens leads to an increase in EGF ligand expression in the CCR and rapid expansion of the GSSC lineage. Thus, the unique proliferative set points for gut stem cells residing in physiologically distinct compartments are governed by regional control of niche signals along the A/P axis.

  2. Regional control of Drosophila gut stem cell proliferation: EGF establishes GSSC proliferative set point & controls emergence from quiescence.

    Directory of Open Access Journals (Sweden)

    Marie Strand

    Adult stem cells vary widely in their rates of proliferation. Some stem cells are constitutively active, while others divide only in response to injury. The mechanism controlling this differential proliferative set point is not well understood. The anterior-posterior (A/P) axis of the adult Drosophila midgut has a segmental organization, displaying physiological compartmentalization and region-specific epithelia. These distinct midgut regions are maintained by defined stem cell populations with unique division schedules, providing an excellent experimental model with which to investigate this question. Here, we focus on the quiescent gastric stem cells (GSSCs) of the acidic copper cell region (CCR), which exhibit the greatest period of latency between divisions of all characterized gut stem cells, to define the molecular basis of differential stem cell activity. Our molecular genetic analysis demonstrates that the mitogenic EGF signaling pathway is a limiting factor controlling GSSC proliferation. We find that under baseline conditions, when GSSCs are largely quiescent, the lowest levels of EGF ligands in the midgut are found in the CCR. However, acute epithelial injury by enteric pathogens leads to an increase in EGF ligand expression in the CCR and rapid expansion of the GSSC lineage. Thus, the unique proliferative set points for gut stem cells residing in physiologically distinct compartments are governed by regional control of niche signals along the A/P axis.

  3. History Matching Through a Smooth Formulation of Multiple-Point Statistics

    DEFF Research Database (Denmark)

    Melnikova, Yulia; Zunino, Andrea; Lange, Katrine

    2014-01-01

    We propose a smooth formulation of multiple-point statistics that enables us to solve inverse problems using gradient-based optimization techniques. We introduce a differentiable function that quantifies the mismatch between multiple-point statistics of a training image and of a given model. We show that, by minimizing this function, any continuous image can be gradually transformed into an image that honors the multiple-point statistics of the discrete training image. The solution to an inverse problem is then found by minimizing the sum of two mismatches: the mismatch with data and the mismatch with multiple-point statistics. As a result, in the framework of the Bayesian approach, such a solution belongs to a high posterior region. The methodology, while applicable to any inverse problem with a training-image-based prior, is especially beneficial for problems which require expensive...

  4. On Time with Minimal Expected Cost!

    DEFF Research Database (Denmark)

    David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand

    2014-01-01

    (Priced) timed games are two-player quantitative games involving an environment assumed to be completely antagonistic. Classical analysis consists in the synthesis of strategies ensuring safety, time-bounded or cost-bounded reachability objectives. Assuming a randomized environment, the (priced) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing reachability strategies that will both ensure worst case time-bounds as well as provide (near-) minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, exhibiting several orders of magnitude improvements w...

  5. Minimizing System Modification in an Incremental Design Approach

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing the system modification cost. We consider an incremental design process that starts from an already existing system running a set of applications. We...

  6. On 3D Minimal Massive Gravity

    CERN Document Server

    Alishahiha, Mohsen; Naseh, Ali; Shirzad, Ahmad

    2014-12-03

    We study linearized equations of motion of the newly proposed three dimensional gravity, known as minimal massive gravity, using its metric formulation. We observe that the resultant linearized equations are exactly the same as those of TMG, by making use of a redefinition of the parameters of the model. In particular, the model admits logarithmic modes at the critical points. We also study several vacuum solutions of the model, especially at a certain limit where the contribution of the Chern-Simons term vanishes.

  7. Asymptotic safety, emergence and minimal length

    International Nuclear Information System (INIS)

    Percacci, Roberto; Vacca, Gian Paolo

    2010-01-01

    There seems to be a common prejudice that asymptotic safety is either incompatible with, or at best unrelated to, the other topics in the title. This is not the case. In fact, we show that (1) the existence of a fixed point with suitable properties is a promising way of deriving emergent properties of gravity, and (2) there is a sense in which asymptotic safety implies a minimal length. In doing so we also discuss possible signatures of asymptotic safety in scattering experiments.

  8. Implementation of Waste Minimization at a complex R&D site

    International Nuclear Information System (INIS)

    Lang, R.E.; Thuot, J.R.; Devgun, J.S.

    1995-01-01

    Under the 1994 Waste Minimization/Pollution Prevention Crosscut Plan, the Department of Energy (DOE) has set a goal of 50% reduction in waste at its facilities by the end of 1999. Each DOE site is required to set site-specific goals to reduce generation of all types of waste including hazardous, radioactive, and mixed. To meet these goals, Argonne National Laboratory (ANL), Argonne, IL, has developed and implemented a comprehensive Pollution Prevention/Waste Minimization (PP/WMin) Program. The facilities and activities at the site vary from research into basic sciences and the nuclear fuel cycle to high energy physics and decontamination and decommissioning projects. As a multidisciplinary R&D facility and a multiactivity site, ANL generates waste streams that are varied, in physical form as well as in chemical constituents. This in turn presents a significant challenge to put a cohesive site-wide PP/WMin Program into action. In this paper, we will describe ANL's key activities and waste streams, the regulatory drivers for waste minimization, and the DOE goals in this area, and we will discuss ANL's strategy for waste minimization and its implementation across the site.

  9. Minimal restsygdom ved maligne blodsygdomme I. Baggrund og praeklinisk validering

    DEFF Research Database (Denmark)

    Hokland, Peter; Nyvold, Charlotte Guldborg; Stentoft, Jesper

    2009-01-01

    In haematological malignancies, molecular markers like fusion DNA from balanced translocations, point mutations, or over-expressed genes can now be used not only for diagnosis, but also for determination of the minimal residual disease (MRD) after cytoreduction with a sensitivity by far exceeding...

  10. Random defect lines in conformal minimal models

    International Nuclear Information System (INIS)

    Jeng, M.; Ludwig, A.W.W.

    2001-01-01

    We analyze the effect of adding quenched disorder along a defect line in the 2D conformal minimal models using replicas. The disorder is realized by a random applied magnetic field in the Ising model, by fluctuations in the ferromagnetic bond coupling in the tricritical Ising model and tricritical three-state Potts model (the φ_12 operator), etc. We find that for the Ising model, the defect renormalizes to two decoupled half-planes without disorder, but that for all other models, the defect renormalizes to a disorder-dominated fixed point. Its critical properties are studied with an expansion in ε ∝ 1/m for the m-th Virasoro minimal model. The decay exponents X_N = (N/2)[1 − 9(3N−4)/(4(m+1)^2)] + O((3/(m+1))^3) of the N-th moment of the two-point function of φ_12 along the defect are obtained to 2-loop order, exhibiting multifractal behavior. This leads to a typical decay exponent X_typ = (1/2)[1 + 9/(m+1)^2] + O((3/(m+1))^3). One-point functions are seen to have a non-self-averaging amplitude. The boundary entropy is larger than that of the pure system by order 1/m^3. As a byproduct of our calculations, we also obtain to 2-loop order the exponent X̃_N = N[1 − (2/(9π^2))(3N−4)(q−2)^2] + O((q−2)^3) of the N-th moment of the energy operator in the q-state Potts model with bulk bond disorder

  11. December 2002 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data (latitude and longitude) from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs)...

  12. April 2004 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data (Geodetic Coordinates) from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs) from...

  13. March 2003 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data (Geodetic Coordinates) from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs) from...

  14. Minimally invasive surgery fellowship graduates: Their demographics, practice patterns, and contributions.

    Science.gov (United States)

    Park, Adrian E; Sutton, Erica R H; Heniford, B Todd

    2015-12-01

    Fellowship opportunities in minimally invasive, bariatric, gastrointestinal, and hepatobiliary surgery arose to address unmet training needs. The large cohort of non-Accreditation Council for Graduate Medical Education-accredited fellowship graduates (NACGMEG) has been difficult to track. In this, the largest survey of graduates to date, our goal was to characterize this unique group's demographics and professional activities. A total of 580 NACGMEG were surveyed covering 150 data points: demographics, practice patterns, academics, lifestyle, leadership, and maintenance of certification. Of 580 previous fellows, 234 responded. Demographics included: average age 37 years, 84% male, 75% in urban settings, 49% in purely academic practice, and 58% engaged in maintenance of certification activities. Fellowship alumni appear to be productive contributors to American surgery. They are clinically and academically active, believe endoscopy is important, have adopted continuous learning, and most assume workplace leadership roles. The majority acknowledge their fellowship training as having met expectations and uniquely equipping them for their current practice. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Minimal Liouville gravity correlation numbers from Douglas string equation

    International Nuclear Information System (INIS)

    Belavin, Alexander; Dubrovin, Boris; Mukhametzhanov, Baur

    2014-01-01

    We continue the study of (q,p) Minimal Liouville Gravity with the help of the Douglas string equation. We generalize the results of http://dx.doi.org/10.1016/0550-3213(91)90548-C and http://dx.doi.org/10.1088/1751-8113/42/30/304004, where the Lee-Yang series (2,2s+1) was studied, to (3,3s+p_0) Minimal Liouville Gravity, where p_0 = 1,2. We demonstrate that there exist coordinates τ_{m,n} on the space of the perturbed Minimal Liouville Gravity theories in which the partition function of the theory is determined by the Douglas string equation. The coordinates τ_{m,n} are related in a non-linear fashion to the natural coupling constants λ_{m,n} of the perturbations of Minimal Liouville Gravity by the physical operators O_{m,n}. We find this relation from the requirement that the correlation numbers in Minimal Liouville Gravity must satisfy the conformal and fusion selection rules. After fixing this relation we compute three- and four-point correlation numbers when they are not zero. The results are in agreement with the direct calculations in Minimal Liouville Gravity available in the literature: http://dx.doi.org/10.1103/PhysRevLett.66.2051, http://dx.doi.org/10.1007/s11232-005-0003-3, http://dx.doi.org/10.1007/s11232-006-0075-8.

  16. Motor synergies and the equilibrium-point hypothesis.

    Science.gov (United States)

    Latash, Mark L

    2010-07-01

    The article offers a way to unite three recent developments in the field of motor control and coordination: (1) The notion of synergies is introduced based on the principle of motor abundance; (2) The uncontrolled manifold hypothesis is described as offering a computational framework to identify and quantify synergies; and (3) The equilibrium-point hypothesis is described for a single muscle, single joint, and multijoint systems. Merging these concepts into a single coherent scheme requires focusing on control variables rather than performance variables. The principle of minimal final action is formulated as the guiding principle within the referent configuration hypothesis. Motor actions are associated with setting two types of variables by a controller, those that ultimately define average performance patterns and those that define associated synergies. Predictions of the suggested scheme are reviewed, such as the phenomenon of anticipatory synergy adjustments, quick actions without changes in synergies, atypical synergies, and changes in synergies with practice. A few models are briefly reviewed.

  17. Stochastic LMP (Locational marginal price) calculation method in distribution systems to minimize loss and emission based on Shapley value and two-point estimate method

    International Nuclear Information System (INIS)

    Azad-Farsani, Ehsan; Agah, S.M.M.; Askarian-Abyaneh, Hossein; Abedi, Mehrdad; Hosseinian, S.H.

    2016-01-01

    LMP (Locational marginal price) calculation is a serious impediment in distribution operation when private DG (distributed generation) units are connected to the network. A novel policy is developed in this study to guide the distribution company (DISCO) in exerting its control over the private units while power loss and greenhouse gas emissions are minimized. LMP at each DG bus is calculated according to the contribution of the DG to the reduced amount of loss and emission. An iterative algorithm based on the Shapley value method is proposed to allocate loss and emission reduction. The proposed algorithm will provide a robust state estimation tool for DISCOs in the next step of operation. The state estimation tool provides the decision maker with the ability to exert its control over private DG units when loss and emission are minimized. Also, a stochastic approach based on the PEM (point estimate method) is employed to capture uncertainty in the market price and load demand. The proposed methodology is applied to a realistic distribution network, and the efficiency and accuracy of the method are verified. - Highlights: • Reduction of the loss and emission at the same time. • Fair allocation of loss and emission reduction. • Estimation of the system state using an iterative algorithm. • Ability of DISCOs to control DG units via the proposed policy. • Modeling the uncertainties to calculate the stochastic LMP.
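    The PEM mentioned in the abstract can be illustrated with a minimal sketch of Hong's two-point estimate scheme (the 2m variant) for propagating input uncertainty through a deterministic model. The function names and the zero-skewness simplification (points at mean ± sqrt(m)·sigma with equal weights) are illustrative assumptions, not the paper's implementation.

    ```python
    import math

    def two_point_estimate(model, means, stds):
        """Estimate E[model(x)] for m independent inputs with given means and
        standard deviations, assuming symmetric (zero-skewness) distributions.
        The model is evaluated at 2m points: each input in turn is shifted by
        +/- sqrt(m) standard deviations while the others stay at their means."""
        m = len(means)
        shift = math.sqrt(m)
        weight = 1.0 / (2 * m)  # equal weights when skewness is zero
        estimate = 0.0
        for k in range(m):
            for sign in (+1.0, -1.0):
                x = list(means)
                x[k] = means[k] + sign * shift * stds[k]
                estimate += weight * model(x)
        return estimate

    # For a linear model the 2m scheme reproduces the exact mean.
    loss = lambda x: 2.0 * x[0] + 3.0 * x[1]
    est = two_point_estimate(loss, [10.0, 20.0], [1.0, 2.0])
    ```

    For nonlinear models the scheme matches the first few moments of each input rather than being exact, which is why it is attractive when each model evaluation (here, a load-flow computation) is expensive.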

  18. Generating minimal living systems from non-living materials and increasing their evolutionary abilities

    DEFF Research Database (Denmark)

    Rasmussen, Steen; Constantinescu, Adi; Svaneborg, Carsten

    2016-01-01

    We review lessons learned about evolutionary transitions from a bottom-up construction of minimal life. We use a particular systemic protocell design process as a starting point for exploring two fundamental questions: (1) how may minimal living systems emerge from nonliving materials? and (2) how may minimal living systems support increasingly more evolutionary richness? Under (1) we present what has been accomplished so far and discuss the remaining open challenges and their possible solutions. Under (2) we present a design principle we have utilized successfully both for our...

  19. Non-minimally coupled tachyon field in teleparallel gravity

    Energy Technology Data Exchange (ETDEWEB)

    Fazlpour, Behnaz [Department of Physics, Babol Branch, Islamic Azad University, Shariati Street, Babol (Iran, Islamic Republic of); Banijamali, Ali, E-mail: b.fazlpour@umz.ac.ir, E-mail: a.banijamali@nit.ac.ir [Department of Basic Sciences, Babol University of Technology, Shariati Street, Babol (Iran, Islamic Republic of)

    2015-04-01

    We perform a full investigation on dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of teleparallel equivalent of general relativity which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor (point P_4), corresponding to an accelerating universe with the property that dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. Existence of this point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of tachyon field.

  20. Non-minimally coupled tachyon field in teleparallel gravity

    International Nuclear Information System (INIS)

    Fazlpour, Behnaz; Banijamali, Ali

    2015-01-01

    We perform a full investigation on dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of teleparallel equivalent of general relativity which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor (point P_4), corresponding to an accelerating universe with the property that dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. Existence of this point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of tachyon field.

  1. Algorithms for detecting and analysing autocatalytic sets.

    Science.gov (United States)

    Hordijk, Wim; Smith, Joshua I; Steel, Mike

    2015-01-01

    Autocatalytic sets are considered to be fundamental to the origin of life. Prior theoretical and computational work on the existence and properties of these sets has relied on a fast algorithm for detecting self-sustaining autocatalytic sets in chemical reaction systems. Here, we introduce and apply a modified version and several extensions of the basic algorithm: (i) a modification aimed at reducing the number of calls to the computationally most expensive part of the algorithm, (ii) the application of a previously introduced extension of the basic algorithm to sample the smallest possible autocatalytic sets within a reaction network, and the application of a statistical test which provides a probable lower bound on the number of such smallest sets, (iii) the introduction and application of another extension of the basic algorithm to detect autocatalytic sets in a reaction system where molecules can also inhibit (as well as catalyse) reactions, (iv) a further, more abstract, extension of the theory behind searching for autocatalytic sets. (i) The modified algorithm outperforms the original one in the number of calls to the computationally most expensive procedure, which, in some cases also leads to a significant improvement in overall running time, (ii) our statistical test provides strong support for the existence of very large numbers (even millions) of minimal autocatalytic sets in a well-studied polymer model, where these minimal sets share about half of their reactions on average, (iii) "uninhibited" autocatalytic sets can be found in reaction systems that allow inhibition, but their number and sizes depend on the level of inhibition relative to the level of catalysis. (i) Improvements in the overall running time when searching for autocatalytic sets can potentially be obtained by using a modified version of the algorithm, (ii) the existence of large numbers of minimal autocatalytic sets can have important consequences for the possible evolvability of
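    The core reduction behind the detection algorithm referenced above can be sketched compactly: repeatedly discard reactions whose reactants or catalysts are not producible from the food set, until a fixed point is reached (the maximal RAF set, or nothing). The data layout and names below are illustrative, not the authors' implementation.

    ```python
    def closure(food, reactions):
        """Molecules reachable from the food set using the given reactions
        (catalysis is ignored here and checked separately)."""
        reachable = set(food)
        changed = True
        while changed:
            changed = False
            for reactants, products, _ in reactions:
                if set(reactants) <= reachable and not set(products) <= reachable:
                    reachable |= set(products)
                    changed = True
        return reachable

    def max_raf(food, reactions):
        """Largest subset of reactions that is reflexively autocatalytic and
        food-generated; the empty list if no such set exists."""
        current = list(reactions)
        while True:
            reach = closure(food, current)
            kept = [r for r in current
                    if set(r[0]) <= reach and any(c in reach for c in r[2])]
            if len(kept) == len(current):
                return current
            current = kept

    # Toy network: a and b are food; r1 makes ab and is catalysed by ab itself,
    # r2 needs a molecule x that is never produced, so it is discarded.
    food = {"a", "b"}
    reactions = [(("a", "b"), ("ab",), ("ab",)),   # r1: self-sustaining
                 (("ab", "x"), ("abx",), ("a",))]  # r2: reactant x unreachable
    raf = max_raf(food, reactions)
    ```

    Each reaction is a (reactants, products, catalysts) triple; the loop removes at least one reaction per pass, so the reduction terminates in at most |reactions| passes.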

  2. Minimal pairs of polytopes and their number of vertices

    African Journals Online (AJOL)

    Preferred Customer

    Using this operation we give a new algorithm to reduce and find a minimal pair of polytopes from the given ... Key words/phrases: Pairs of compact convex sets, Blaschke addition, Minkowski sum, minimality ... product K(X)×K(X) by K²(X).

  3. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters
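    The alternating strategy described above can be caricatured in one dimension: alternate multiplicative Richardson-Lucy updates (which decrease the Kullback-Leibler divergence under Poisson noise) in the object and in the blur kernel, keeping each in its feasible set (nonnegative; kernel on the unit simplex). This is a schematic sketch under those simplifying assumptions, not the paper's inexact alternating algorithm.

    ```python
    def conv(x, h):
        """Full 1-D convolution of two nonnegative sequences."""
        n, k = len(x), len(h)
        return [sum(x[i - j] * h[j] for j in range(k) if 0 <= i - j < n)
                for i in range(n + k - 1)]

    def rl_update(x, h, y):
        """One Richardson-Lucy multiplicative update of x for fixed h."""
        Hx = conv(x, h)
        ratio = [y[i] / Hx[i] if Hx[i] > 0 else 0.0 for i in range(len(y))]
        return [x[i] * sum(h[j] * ratio[i + j] for j in range(len(h)))
                for i in range(len(x))]

    def blind_deconvolve(y, x, h, iters=25):
        for _ in range(iters):
            x = rl_update(x, h, y)   # object step (kernel fixed, sum(h) == 1)
            h = rl_update(h, x, y)   # kernel step: same update, roles swapped
            s = sum(h)
            h = [v / s for v in h]   # project kernel back onto the simplex
        return x, h

    x_true, h_true = [0.0, 3.0, 8.0, 2.0, 0.0], [0.25, 0.5, 0.25]
    y = conv(x_true, h_true)         # noiseless Poisson-mean data
    x_est, h_est = blind_deconvolve(y, [1.0] * 5, [1.0 / 3] * 3)
    ```

    The multiplicative form keeps both unknowns nonnegative automatically, and with a normalized kernel each object step conserves the total flux of the data.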

  4. Minimizing size of decision trees for multi-label decision tables

    KAUST Repository

    Azad, Mohammad

    2014-09-29

    We used decision trees as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and the goal is to find one arbitrary decision from the set attached to a row. The size of the decision tree can be small as well as very large. We study here different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. When we compared against the optimal result from the dynamic programming algorithm, we found that some greedy algorithms produce results close to the optimal for the minimization of the number of nodes (at most 18.92% difference), the number of nonterminal nodes (at most 20.76% difference), and the number of terminal nodes (at most 18.71% difference).

  5. Minimizing size of decision trees for multi-label decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    We used decision trees as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and the goal is to find one arbitrary decision from the set attached to a row. The size of the decision tree can be small as well as very large. We study here different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. When we compared against the optimal result from the dynamic programming algorithm, we found that some greedy algorithms produce results close to the optimal for the minimization of the number of nodes (at most 18.92% difference), the number of nonterminal nodes (at most 20.76% difference), and the number of terminal nodes (at most 18.71% difference).
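    The problem statement in the two records above can be made concrete with a small greedy sketch: a leaf is possible whenever some decision is common to every row's decision set; otherwise split on an attribute and recurse. The greedy criterion used here (split on the attribute with the most distinct values) is an illustrative stand-in, not one of the authors' algorithms.

    ```python
    def build_tree(rows, n_attrs):
        """rows: list of (attribute_tuple, decision_set).
        Returns (tree, number_of_nodes)."""
        common = set.intersection(*(d for _, d in rows))
        if common:
            # Some decision satisfies every row: emit a single leaf.
            return ("leaf", min(common)), 1
        # Greedy choice: attribute with the most distinct values among rows.
        best = max(range(n_attrs), key=lambda a: len({r[0][a] for r in rows}))
        size, children = 1, {}
        for value in sorted({r[0][best] for r in rows}):
            sub = [r for r in rows if r[0][best] == value]
            children[value], s = build_tree(sub, n_attrs)
            size += s
        return ("split", best, children), size

    # Four rows, two attributes; no single decision covers all rows,
    # but one split on the first attribute yields two leaves.
    table = [((0, 0), {1}), ((0, 1), {1, 2}), ((1, 0), {2}), ((1, 1), {2, 3})]
    tree, n_nodes = build_tree(table, 2)
    ```

    The dynamic programming approach in the papers instead explores subtables exhaustively to find the provably minimal tree, against which greedy heuristics like this one are benchmarked.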

  6. Technical tips and advancements in pediatric minimally invasive surgical training on porcine based simulations.

    Science.gov (United States)

    Narayanan, Sarath Kumar; Cohen, Ralph Clinton; Shun, Albert

    2014-06-01

    Minimal access techniques have transformed the way pediatric surgery is practiced. Due to various constraints, surgical residency programs have not been able to provide adequate skills training in the routine setting. The advent of new technology and methods in minimally invasive surgery (MIS) has similarly contributed to the need for systematic skills training in a safe, simulated environment. To enable training in proper technique among pediatric surgery trainees, we have advanced a porcine non-survival model for endoscopic surgery. The technical advancements over the past 3 years and a subjective validation of the porcine model by 114 participating trainees, using a standard questionnaire and a 5-point Likert scale, are described here. Mean attitude scores and analysis of variance (ANOVA) were used for statistical analysis of the data. Almost all trainees agreed or strongly agreed that the animal-based model was appropriate (98.35%) and acknowledged that such workshops provide adequate practical experience before attempting procedures on human subjects (96.6%). The mean attitude score for respondents was 19.08 (SD 3.4, range 4-20). Attitude scores showed no statistical association with years of experience or level of seniority, indicating a positive attitude among all groups of respondents. Structured porcine-based MIS training should be an integral part of skill acquisition for pediatric surgery trainees, and the experience gained can be transferred into clinical practice. We advocate that laparoscopic training begin in a controlled workshop setting before procedures are attempted on human patients.

  7. CHESS-changing horizon efficient set search: A simple principle for multiobjective optimization

    DEFF Research Database (Denmark)

    Borges, Pedro Manuel F. C.

    2000-01-01

    This paper presents a new concept for generating approximations to the non-dominated set in multiobjective optimization problems. The approximation set A is constructed by solving several single-objective minimization problems in which a particular function D(A, z) is minimized. A new algorithm t...

  8. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Using an algorithm based on minimizing the distance between two descriptors, point features are tracked throughout the image sequences. Experimental results, obtained from image sequences that capture scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
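    Descriptor-distance tracking of the kind described above reduces to nearest-neighbour matching between consecutive frames; a common safeguard is Lowe's ratio test, which rejects ambiguous matches. The sketch below uses short plain lists as stand-in descriptors (real SIFT descriptors are 128-dimensional) and is illustrative, not the paper's implementation.

    ```python
    import math

    def dist(a, b):
        """Euclidean distance between two descriptors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def track(desc_prev, desc_next, ratio=0.8):
        """Return {index_in_prev: index_in_next} for accepted matches."""
        matches = {}
        for i, d in enumerate(desc_prev):
            ranked = sorted(range(len(desc_next)),
                            key=lambda j: dist(d, desc_next[j]))
            if len(ranked) < 2:
                if ranked:
                    matches[i] = ranked[0]
                continue
            best, second = ranked[0], ranked[1]
            # Lowe's ratio test: accept only clearly unambiguous matches.
            if dist(d, desc_next[best]) < ratio * dist(d, desc_next[second]):
                matches[i] = best
        return matches

    prev = [[0.0, 0.0], [5.0, 5.0]]
    nxt = [[5.1, 5.0], [0.1, 0.0], [9.0, 9.0]]
    m = track(prev, nxt)
    ```

    Chaining these per-frame matches over a sequence yields the feature tracks; unmatched features are treated as lost and new ones are re-detected.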

  9. Setting and validating the pass/fail score for the NBDHE.

    Science.gov (United States)

    Tsai, Tsung-Hsun; Dixon, Barbara Leatherman

    2013-04-01

    This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE). The Objective Standard Setting (OSS) method was used for setting the pass/fail score for the NBDHE. The OSS method requires a panel of experts to determine the criterion items, the proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery, and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of scientific basis for dental hygiene practice, provision of clinical dental hygiene services and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and then the score scale was established using the Rasch measurement model. Statistical and psychometric analysis shows that the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%), and that the lowest error of measurement (an index of precision) and the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.
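    The arithmetic of a criterion-referenced cut score of this general kind can be sketched as follows: the raw pass mark is the judged mastery proportion applied to the number of criterion items, lowered by an error band of z standard errors of measurement (here a binomial SEM). All parameter values are made up for illustration; the actual OSS procedure and Rasch scaling are more involved.

    ```python
    import math

    def oss_cut_score(n_criterion_items, mastery_proportion, z=1.0):
        """Raw cut score = mastery * items, minus z binomial standard errors."""
        raw = mastery_proportion * n_criterion_items
        sem = math.sqrt(n_criterion_items * mastery_proportion
                        * (1 - mastery_proportion))
        return raw - z * sem

    # Hypothetical exam: 100 criterion items, judged 70% mastery, 1-SEM band.
    cut = oss_cut_score(100, 0.70, z=1.0)
    ```

    Widening the error band (larger z) lowers the cut score, trading a smaller false-fail rate for a larger false-pass rate.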

  10. Optimization for set-points and robust model predictive control for steam generator in nuclear power plants

    International Nuclear Information System (INIS)

    Osgouee, Ahmad

    2010-01-01

    Despite the many advanced control methods proposed for the control of nuclear SG water level, operators are still experiencing difficulties, especially at low powers. Therefore, it seems that a suitable controller to replace the manual operations is still needed. In this paper, optimization of SGL set-points and the design of a robust controller for the SGL control system will be discussed

  11. Genetic analysis of the gravitropic set-point angle in lateral roots of arabidopsis

    Science.gov (United States)

    Mullen, J. L.; Hangarter, R. P.

    2003-05-01

    Research on gravity responses in plants has mostly focused on primary roots and shoots, which typically orient to a vertical orientation. However, the distribution of lateral organs and their characteristically non-vertical growth orientation are critical for the determination of plant form. For example, in Arabidopsis, when lateral roots emerge from the primary root, they grow at a nearly horizontal orientation. As they elongate, the roots slowly curve until they eventually reach a vertical orientation. The regulation of this lateral root orientation is an important component affecting overall root system architecture. We found that this change in orientation is not simply due to the onset of gravitropic competence, as non-vertical lateral roots are capable of both positive and negative gravitropism. Thus, the horizontal growth of new lateral roots appears to be determined by what is called the gravitropic set-point angle (GSA). This developmental control of the GSA of lateral roots in Arabidopsis provides a useful system for investigating the components involved in regulating gravitropic responses. Using this system, we have identified several Arabidopsis mutants that have altered lateral root orientations but maintain normal primary root orientation.

  12. Comprehensive simulation-enhanced training curriculum for an advanced minimally invasive procedure: a randomized controlled trial.

    Science.gov (United States)

    Zevin, Boris; Dedy, Nicolas J; Bonrath, Esther M; Grantcharov, Teodor P

    2017-05-01

    There is no comprehensive simulation-enhanced training curriculum to address cognitive, psychomotor, and nontechnical skills for an advanced minimally invasive procedure. Our goals were: (1) to develop and provide evidence of validity for a comprehensive simulation-enhanced training (SET) curriculum for an advanced minimally invasive procedure; (2) to demonstrate transfer of acquired psychomotor skills from a simulation laboratory to a live porcine model; and (3) to compare training outcomes of the SET curriculum group and a chief resident group. Setting: university. This prospective, single-blinded, randomized controlled trial allocated 20 intermediate-level surgery residents to receive either conventional training (control) or SET curriculum training (intervention). The SET curriculum consisted of cognitive, psychomotor, and nontechnical training modules. Psychomotor skills in a live anesthetized porcine model in the OR was the primary outcome. Knowledge of advanced minimally invasive and bariatric surgery and nontechnical skills in a simulated OR crisis scenario were the secondary outcomes. Residents in the SET curriculum group went on to perform a laparoscopic jejunojejunostomy in the OR. Cognitive, psychomotor, and nontechnical skills of the SET curriculum group were also compared to a group of 12 chief surgery residents. The SET curriculum group demonstrated superior psychomotor skills in the live porcine model (56 [47-62] versus 44 [38-53], P<.05) and equivalent psychomotor skills in the live porcine model and in the OR in a human patient (56 [47-62] versus 63 [61-68]; P = .21). The SET curriculum group demonstrated inferior knowledge (13 [11-15] versus 16 [14-16]; P<.05), equivalent psychomotor skill (63 [61-68] versus 68 [62-74]; P = .50), and superior nontechnical skills (41 [38-45] versus 34 [27-35], P<.01) compared with the chief resident group. Completion of the SET curriculum resulted in superior training outcomes compared with conventional surgery training. Implementation of the SET curriculum can standardize training

  13. Basic Minimal Dominating Functions of Quadratic Residue Cayley ...

    African Journals Online (AJOL)

    Domination arises in the study of numerous facility location problems where the number of facilities is fixed and one attempts to minimize the number of facilities necessary so that everyone is serviced. This problem reduces to finding a minimum dominating set in the graph corresponding to this network. In this paper we study ...
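    The facility-location reading above admits a classical greedy approximation: at each step pick the vertex whose closed neighbourhood covers the most still-uncovered vertices. This sketch shows that generic heuristic, not the paper's Cayley-graph construction.

    ```python
    def greedy_dominating_set(adj):
        """adj: {vertex: set of neighbours}. Returns a dominating set,
        i.e. every vertex is chosen or adjacent to a chosen vertex."""
        uncovered = set(adj)
        chosen = set()
        while uncovered:
            # Pick the vertex that newly dominates the most vertices.
            v = max(adj, key=lambda u: len(({u} | adj[u]) & uncovered))
            chosen.add(v)
            uncovered -= {v} | adj[v]
        return chosen

    # A 5-cycle: two non-adjacent vertices suffice to dominate it.
    cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
    dom = greedy_dominating_set(cycle)
    ```

    Greedy achieves a logarithmic approximation ratio, which is essentially the best possible for general graphs since minimum dominating set is NP-hard.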

  14. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  15. Frequencies of digits, divergence points, and Schmidt games

    International Nuclear Information System (INIS)

    Olsen, L.

    2009-01-01

    Sets of divergence points, i.e. numbers x (or tuples of numbers) for which the limiting frequency of a given string of N-adic digits of x fails to exist, have recently attracted huge interest in the literature. In this paper we consider sets of simultaneous divergence points, i.e. numbers x (or tuples of numbers) for which the limiting frequencies of all strings of N-adic digits of x fail to exist. We show that many natural sets of simultaneous divergence points are (α, β)-winning sets in the sense of the Schmidt game. As an application we obtain lower bounds for the Hausdorff dimension of these sets.
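    A concrete divergence point of the kind studied above is easy to exhibit: in base 2, write digits in blocks of doubling length, alternating all-zeros and all-ones. The running frequency of the digit 1 then oscillates between values near 1/3 and 2/3 and has no limit. The construction below is a standard textbook example, offered only to illustrate the definition.

    ```python
    def doubling_blocks(n_blocks):
        """Binary digits in blocks of lengths 1, 2, 4, ..., alternating 0s and 1s."""
        digits, bit = [], 0
        for k in range(n_blocks):
            digits += [bit] * (2 ** k)
            bit = 1 - bit
        return digits

    def freq_of_one(digits):
        return sum(digits) / len(digits)

    d = doubling_blocks(12)
    low = freq_of_one(d[: 2 ** 11 - 1])  # prefix ending after a long zero block
    high = freq_of_one(d)                # prefix ending after a long one block
    ```

    Because each new block is as long as all previous digits combined, the frequency after a zero block stays near 1/3 and after a one block near 2/3, so liminf and limsup differ and the limiting frequency fails to exist.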

  16. Indexing Moving Points

    DEFF Research Database (Denmark)

    Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff

    2003-01-01

    We propose three indexing schemes for storing a set S of N points in the plane, each moving along a linear trajectory, so that any query of the following form can be answered quickly: Given a rectangle R and a real value t, report all K points of S that lie inside R at time t. We first present an...
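    The query semantics indexed in the record above can be fixed with a brute-force baseline: each point moves linearly, p(t) = p0 + v·t, and a query (R, t) reports the points inside rectangle R at time t. An index answers this without scanning all N points; the linear scan below merely states what must be computed.

    ```python
    def query(points, rect, t):
        """points: list of ((x0, y0), (vx, vy)); rect: (xmin, xmax, ymin, ymax).
        Returns indices of points inside rect at time t."""
        xmin, xmax, ymin, ymax = rect
        hits = []
        for k, ((x0, y0), (vx, vy)) in enumerate(points):
            x, y = x0 + vx * t, y0 + vy * t  # position along the trajectory
            if xmin <= x <= xmax and ymin <= y <= ymax:
                hits.append(k)
        return hits

    pts = [((0.0, 0.0), (1.0, 0.0)),   # moves right
           ((5.0, 5.0), (0.0, -1.0))]  # moves down
    inside = query(pts, (4.0, 6.0, -1.0, 1.0), 5.0)
    ```

    The indexing schemes in the paper aim to answer such queries in time sublinear in N by organizing the trajectories, rather than evaluating every point at query time as this baseline does.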

  17. Loss Minimization Sliding Mode Control of IPM Synchronous Motor Drives

    Directory of Open Access Journals (Sweden)

    Mehran Zamanifar

    2010-01-01

    In this paper, a nonlinear loss minimization control strategy for an interior permanent magnet synchronous motor (IPMSM) based on a newly developed sliding mode approach is presented. This control method enforces speed control of the IPMSM drives and simultaneously ensures minimization of the losses despite the uncertainties in the system, such as parameter variations, which have undesirable effects on controller performance except at near-nominal conditions. Simulation results are presented to show the effectiveness of the proposed controller.
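    The sliding mode idea invoked above can be shown in its simplest first-order form: define a sliding surface as the speed error and switch the control about it, which drives the error to the surface despite plant uncertainty. The scalar plant model and gains below are toy values for illustration, not the paper's IPMSM model or its loss-minimizing controller.

    ```python
    def simulate(w_ref=100.0, k=60.0, a=1.0, b=2.0, dt=1e-3, steps=5000):
        """Toy plant dw/dt = -a*w + b*u under a switching (sliding mode) law."""
        w = 0.0
        for _ in range(steps):
            s = w_ref - w                       # sliding surface: speed error
            u = k if s > 0 else -k if s < 0 else 0.0  # discontinuous control
            w += dt * (-a * w + b * u)          # forward-Euler plant step
        return w

    w_final = simulate()
    ```

    The discontinuous law makes the closed loop insensitive to bounded parameter errors at the price of chattering, which practical designs smooth with boundary layers or higher-order sliding modes.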

  18. Surgical Treatment of Carpal Tunnel Syndrome through a Minimal Incision on the Distal Wrist Crease: An Anatomical and Clinical Study

    Directory of Open Access Journals (Sweden)

    Hye Mi Yoo

    2015-05-01

    Background: An anatomical analysis of the transverse carpal ligament (TCL) and the surrounding structures might help in identifying effective measures to minimize complications. Here, we present a surgical technique based on an anatomical study that was successfully applied in clinical settings. Methods: Using 13 hands from 8 formalin-fixed cadavers, we measured the TCL length and thickness, the correlation between the distal wrist crease and the proximal end of the TCL, and the distance between the distal end of the TCL and the palmar arch; the TCL cross sections and the thickest parts were also examined. Clinically, fasciotomy was performed on the relevant parts of 15 hands from 13 patients by making a minimally invasive incision on the distal wrist crease. Postoperatively, a two-point discrimination check was conducted in which the sensations of the first, second, and third fingertips and the palmar cutaneous branch injuries were monitored (average duration, 7 months). Results: In the 13 cadaveric hands, the distal wrist crease and the proximal end of the TCL were in the same location. The average length of the TCL and the distance from the distal TCL to the superficial palmar arch were 35.30±2.59 mm and 9.50±2.13 mm, respectively. The thickest part of the TCL was a region 25 mm distal to the distal wrist crease (average thickness, 4.00±0.57 mm). The 13 surgeries performed in the clinical settings yielded satisfactory results. Conclusions: This peri-TCL anatomical study confirmed the safety of fasciotomy with a minimally invasive incision on the distal wrist crease. The clinical application of the technique indicated that the minimally invasive incision on the distal wrist crease was efficacious in the treatment of carpal tunnel syndrome.

  19. Non-minimal Higgs inflation and frame dependence in cosmology

    International Nuclear Information System (INIS)

    Steinwachs, Christian F.; Kamenshchik, Alexander Yu.

    2013-01-01

    We investigate a very general class of cosmological models with scalar fields non-minimally coupled to gravity. A particular representative in this class is given by the non-minimal Higgs inflation model in which the Standard Model Higgs boson and the inflaton are described by one and the same scalar particle. While the predictions of the non-minimal Higgs inflation scenario come numerically remarkably close to the recently discovered mass of the Higgs boson, there remains a conceptual problem in this model that is associated with the choice of the cosmological frame. While the classical theory is independent of this choice, we find by an explicit calculation that already the first quantum corrections induce a frame dependence. We give a geometrical explanation of this frame dependence by embedding it into a more general field theoretical context. From this analysis, some conceptual points in the long-standing cosmological debate 'Jordan frame vs. Einstein frame' become more transparent and can in principle be resolved in a natural way.

  20. Mixed low-level waste minimization at Los Alamos

    International Nuclear Information System (INIS)

    Starke, T.P.

    1998-01-01

    During the first six months of the University of California fiscal year 1998 (July--December), Los Alamos National Laboratory achieved a 57% reduction in mixed low-level waste (MLLW) generation. This has been accomplished through a systems approach that identified and minimized the largest MLLW streams. These included surface-contaminated lead, lead-lined gloveboxes, printed circuit boards, and activated fluorescent lamps. Specific waste minimization projects have been initiated to address these streams. In addition, several chemical processing equipment upgrades are being implemented. Use of contaminated lead is planned for several high energy proton beam stop applications, and stainless steel encapsulated lead is being evaluated for other radiological control area applications. INEEL is assisting Los Alamos with a complete systems analysis of analytical chemistry derived mixed wastes at the CMR building and with a minimum life-cycle cost standard glovebox design. Funding for waste minimization upgrades has come from several sources: generator programs, waste management, the generator set-aside program, and Defense Programs funding to INEEL.

  1. Absolutely minimal extensions of functions on metric spaces

    International Nuclear Information System (INIS)

    Milman, V A

    1999-01-01

    Extensions of a real-valued function from the boundary ∂X_0 of an open subset X_0 of a metric space (X,d) to X_0 are discussed. For the broad class of initial data coming under discussion (linearly bounded functions) locally Lipschitz extensions to X_0 that preserve localized moduli of continuity are constructed. In the set of these extensions an absolutely minimal extension is selected, which was considered before by Aronsson for Lipschitz initial functions in the case X_0 ⊂ R^n. An absolutely minimal extension can be regarded as an ∞-harmonic function, that is, a limit of p-harmonic functions as p→+∞. The proof of the existence of absolutely minimal extensions in a metric space with intrinsic metric is carried out by the Perron method. To this end, ∞-subharmonic, ∞-superharmonic, and ∞-harmonic functions on a metric space are defined and their properties are established

  2. Mixed low-level waste minimization at Los Alamos

    Energy Technology Data Exchange (ETDEWEB)

    Starke, T.P.

    1998-12-01

    During the first six months of the University of California fiscal year 1998 (July--December), Los Alamos National Laboratory achieved a 57% reduction in mixed low-level waste (MLLW) generation. This has been accomplished through a systems approach that identified and minimized the largest MLLW streams. These included surface-contaminated lead, lead-lined gloveboxes, printed circuit boards, and activated fluorescent lamps. Specific waste minimization projects have been initiated to address these streams. In addition, several chemical processing equipment upgrades are being implemented. Use of contaminated lead is planned for several high energy proton beam stop applications, and stainless steel encapsulated lead is being evaluated for other radiological control area applications. INEEL is assisting Los Alamos with a complete systems analysis of analytical chemistry derived mixed wastes at the CMR building and with a minimum life-cycle cost standard glovebox design. Funding for waste minimization upgrades has come from several sources: generator programs, waste management, the generator set-aside program, and Defense Programs funding to INEEL.

  3. Density functional theory calculations of the lowest energy quintet and triplet states of model hemes: role of functional, basis set, and zero-point energy corrections.

    Science.gov (United States)

    Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman

    2008-04-24

    We investigated the effect of several computational variables, including the choice of the basis set, application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and B97-1, OLYP, and TPSS functionals with 6-31G and 6-31G* basis sets. Only hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of the basis set caused up to 2.7 kcal/mol variation of the quintet-triplet electronic energy gap (ΔE_el), in several cases resulting in the inversion of the sign of ΔE_el. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311++G(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state if the electronic energy of the triplet state was approximately 1 kcal/mol less than that of the quintet. Within a given model chemistry, effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on ΔE_el were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with small 6-31G and 6-31G* basis sets. Deviation of the computed frequency of the Fe-Im stretching mode from the experimental value with the basis set decreased in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions. Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results

  4. New Maximal Two-distance Sets

    DEFF Research Database (Denmark)

    Lisonek, Petr

    1996-01-01

    A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...
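    The defining property above is easy to test computationally. The sketch below (illustrative, not part of the record; the function name and tolerance are assumptions) checks whether a finite point set is a two-distance set by collecting the distinct pairwise distances up to a tolerance:

```python
import itertools
import math

def is_two_distance_set(points, tol=1e-9):
    """Return True if the pairwise distances between distinct points
    take at most two different non-zero values (up to tol)."""
    distinct = []
    for p, q in itertools.combinations(points, 2):
        d = math.dist(p, q)
        if not any(abs(d - e) < tol for e in distinct):
            distinct.append(d)
    return len(distinct) <= 2

# The vertices of a unit square form a two-distance set in E^2:
# side length 1 and diagonal length sqrt(2).
print(is_two_distance_set([(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
print(is_two_distance_set([(0, 0), (1, 0), (3, 0)]))          # False
```

This brute-force check is quadratic in the number of points; the classification algorithm in the record works in the other direction, enumerating candidate maximal sets rather than testing given ones.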

  5. Separations: The path to waste minimization

    International Nuclear Information System (INIS)

    Bell, J.T.

    1992-01-01

    Waste materials usually are composed of large amounts of innocuous and frequently useful components mixed with lesser amounts of one or more hazardous components. The ultimate path to waste minimization is the separation of the lesser quantities of hazardous components from the innocuous components, followed by recycling of the useful components. This vision is so simple that everyone would be expected to manage waste properly. Several parameters interfere with proper waste management and encourage the "sweep it under the rug" or the "bury it all" attitudes, both of which delay and complicate proper waste management. The two primary parameters that interfere with proper waste management are these: economics drives a process to a product without concern for waste minimization, and emergency needs for immediate production of a product usually delay proper waste management. In recent years a third parameter has also been interfering with proper waste management: quick relief of waste insults to political and public perceptions promotes the "bury it all" attitude. A fourth parameter can promote better waste management in any scenario that suffers from any or all of the first three: separations technology can minimize wastes when the application of this technology is not voided by the influence of the first three parameters. The US Department of Energy's management of nuclear waste has been seriously affected by the above four parameters. This paper includes several points about how the generation and management of DOE wastes have been, and continue to be, affected by these parameters. Particular separations technologies for minimizing the DOE wastes that must be stored for long periods are highlighted

  6. Electric dipole moment constraints on minimal electroweak baryogenesis

    CERN Document Server

    Huber, Stephan J.; Pospelov, Maxim; Ritz, Adam

    2007-01-01

    We study the simplest generic extension of the Standard Model which allows for conventional electroweak baryogenesis, through the addition of dimension six operators in the Higgs sector. At least one such operator is required to be CP-odd, and we study the constraints on such a minimal setup, and related scenarios with minimal flavor violation, from the null results of searches for electric dipole moments (EDMs), utilizing the full set of two-loop contributions to the EDMs. The results indicate that the current bounds are stringent, particularly that of the recently updated neutron EDM, but fall short of ruling out these scenarios. The next generation of EDM experiments should be sufficiently sensitive to provide a conclusive test.

  7. Development and validation of a noncontact spectroscopic device for hemoglobin estimation at point-of-care

    Science.gov (United States)

    Sarkar, Probir Kumar; Pal, Sanchari; Polley, Nabarun; Aich, Rajarshi; Adhikari, Aniruddha; Halder, Animesh; Chakrabarti, Subhananda; Chakrabarti, Prantar; Pal, Samir Kumar

    2017-05-01

    Anemia severely and adversely affects human health and socioeconomic development. Measuring hemoglobin with the minimal involvement of human and financial resources has always been challenging. We describe a translational spectroscopic technique for noncontact hemoglobin measurement at low-resource point-of-care settings in human subjects, independent of their skin color, age, and sex, by measuring the optical spectrum of the blood flowing in the vascular bed of the bulbar conjunctiva. We developed software on the LabVIEW platform for automatic data acquisition and interpretation by nonexperts. The device is calibrated by comparing the differential absorbance of light of wavelength 576 and 600 nm with the clinical hemoglobin level of the subject. Our proposed method is consistent with the results obtained using the current gold standard, the automated hematology analyzer. The proposed noncontact optical device for hemoglobin estimation is highly efficient, inexpensive, feasible, and extremely useful in low-resource point-of-care settings. The device output correlates with the different degrees of anemia with absolute and trending accuracy similar to those of widely used invasive methods. Moreover, the device can instantaneously transmit the generated report to a medical expert through e-mail, text messaging, or mobile apps.

  8. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced. The reconstruction algorithm is illustrated on various test cases including natural and urban terrain data, and enhancement of low-resolution or aliased images. Copyright © by SIAM.

  9. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts

  10. Characterizing fixed points

    Directory of Open Access Journals (Sweden)

    Sanjo Zlobec

    2017-04-01

    Full Text Available A set of sufficient conditions which guarantee the existence of a point x⋆ such that f(x⋆) = x⋆ is called a "fixed point theorem". Many such theorems are named after well-known mathematicians and economists. Fixed point theorems are among the most useful ones in applied mathematics, especially in economics and game theory. A particularly important theorem in these areas is Kakutani's fixed point theorem, which ensures the existence of a fixed point for point-to-set mappings, e.g., [2, 3, 4]. John Nash developed and applied Kakutani's ideas to prove the existence of what became known as the "Nash equilibrium" for finite games with mixed strategies for any number of players. This work earned him a Nobel Prize in Economics that he shared with two mathematicians. Nash's life was dramatized in the movie "A Beautiful Mind" in 2001. In this paper, we approach the system f(x) = x differently. Instead of studying the existence of its solutions, our objective is to determine conditions which are both necessary and sufficient for an arbitrary point x⋆ to be a fixed point, i.e., to satisfy f(x⋆) = x⋆. The existence of solutions for a continuous function f of a single variable is easy to establish using the Intermediate Value Theorem of Calculus. However, characterizing fixed points x⋆, i.e., providing both necessary and sufficient conditions for an arbitrary given x⋆ to satisfy f(x⋆) = x⋆, is not simple even for functions of a single variable. It is possible that constructive answers do not exist. Our objective is to find them. Our work may require some less familiar tools. One of these might be the "quadratic envelope characterization of a zero-derivative point" recalled in the next section. The results are taken from the author's current research project "Studying the Essence of Fixed Points". They are believed to be original. The author has received several pieces of feedback on the preliminary report and on parts of the project.

  11. Segmentation of Synchrotron Radiation micro-Computed Tomography Images using Energy Minimization via Graph Cuts

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Anderson A.M. [Federal University of Western Para (Brazil); Physics Institute, Rio de Janeiro State University (Brazil); Giusti, Alessandro [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Almeida, Andre P. de, E-mail: apalmeid@gmail.com [Physics Institute, Rio de Janeiro State University (Brazil); Nuclear Engineering Program, Federal University of Rio de Janeiro (Brazil); Nogueira, Liebert; Braz, Delson [Nuclear Engineering Program, Federal University of Rio de Janeiro (Brazil); Almeida, Carlos E. de [Radiological Sciences Laboratory, Rio de Janeiro State University (Brazil); Barroso, Regina C. [Physics Institute, Rio de Janeiro State University (Brazil)

    2012-07-15

    The research on applications of segmentation algorithms to Synchrotron Radiation X-Ray micro-Computed Tomography (SR-μCT) is an open problem, due to the interesting and well-known characteristics of SR images, such as the phase contrast effect. The Energy Minimization via Graph Cuts (EMvGC) algorithm represents a state-of-the-art segmentation algorithm, presenting enormous potential for application in SR-μCT imaging. We describe the application of the EMvGC algorithm with swap move for the segmentation of bone images acquired at the ELETTRA Laboratory (Trieste, Italy). - Highlights: ► Microstructures of Wistar rats' ribs are investigated with Synchrotron Radiation μCT imaging. ► The present work is part of a research project on the effects of radiotherapy on the thoracic region. ► Application of the Energy Minimization via Graph Cuts algorithm for segmentation is described.

  12. Hawaii ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for seabird nesting colonies in coastal Hawaii. Vector points in this data set represent locations of...

  13. Maryland ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for raptors in Maryland. Vector points in this data set represent bird nesting sites. Species-specific...

  14. Virginia ESI: REPTPT (Reptile Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for sea turtles in Virginia. Vector points in this data set represent nesting sites. Species-specific...

  15. International Spinal Cord Injury Urinary Tract Infection Basic Data Set

    DEFF Research Database (Denmark)

    Goetz, L L; Cardenas, D D; Kennelly, M

    2013-01-01

    To develop an International Spinal Cord Injury (SCI) Urinary Tract Infection (UTI) Basic Data Set presenting a standardized format for the collection and reporting of a minimal amount of information on UTIs in daily practice or research.

  16. Interesting Interest Points

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Lindbjerg; Pedersen, Kim Steenstrup

    2012-01-01

    Not all interest points are equally interesting. The most valuable interest points lead to optimal performance of the computer vision method in which they are employed. But a measure of this kind will be dependent on the chosen vision application. We propose a more general performance measure based on spatial invariance of interest points under changing acquisition parameters by measuring the spatial recall rate. The scope of this paper is to investigate the performance of a number of existing well-established interest point detection methods. Automatic performance evaluation of interest points is hard... position. The LED illumination provides the option for artificially relighting the scene from a range of light directions. This data set has given us the ability to systematically evaluate the performance of a number of interest point detectors. The highlights of the conclusions are that the fixed scale...

  17. Periodical cicadas: A minimal automaton model

    Science.gov (United States)

    de O. Cardozo, Giovano; de A. M. M. Silvestre, Daniel; Colato, Alexandre

    2007-08-01

    The Magicicada spp. life cycles, with their prime periods and highly synchronized emergence, have defied reasonable scientific explanation since their discovery. During the last decade several models and explanations for this phenomenon appeared in the literature along with a great deal of discussion. Despite this considerable effort, there is no final conclusion about this long-standing biological problem. Here, we construct a minimal automaton model without predation/parasitism which reproduces some of these aspects. Our results point towards competition between different strains with a limited dispersal threshold as the main factor leading to the emergence of prime-numbered life cycles.
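    The competition argument can be illustrated with a toy calculation (an assumption-laden sketch, not the authors' automaton): two strains with cycle lengths p and q that emerge together in some year co-emerge again every lcm(p, q) years, so prime cycle lengths tend to maximize the average time between encounters with rival cycles.

```python
from math import lcm
from statistics import mean

def mean_coemergence_interval(p, rival_periods):
    """Average number of years between simultaneous emergences of a
    strain with cycle length p and rivals with the given cycle lengths
    (all assumed to co-emerge in year 0)."""
    return mean(lcm(p, q) for q in rival_periods)

rivals = range(10, 21)  # hypothetical range of competing cycle lengths
for p in (12, 13, 14, 15, 16, 17):
    print(p, round(mean_coemergence_interval(p, rivals), 1))
# The prime periods 13 and 17 give the largest mean intervals among
# these candidates, i.e. the least overlap with competing strains.
```

This is only the number-theoretic core of the competition story; the automaton in the record additionally models spatial dispersal on a lattice.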

  18. Louisiana ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for seabird and wading bird nesting colonies in coastal Louisiana. Vector points in this data set represent...

  19. National proficiency-gain curves for minimally invasive gastrointestinal cancer surgery.

    Science.gov (United States)

    Mackenzie, H; Markar, S R; Askari, A; Ni, M; Faiz, O; Hanna, G B

    2016-01-01

    Minimal access surgery for gastrointestinal cancer has short-term benefits but is associated with a proficiency-gain curve. The aim of this study was to define national proficiency-gain curves for minimal access colorectal and oesophagogastric surgery, and to determine the impact on clinical outcomes. All adult patients undergoing minimal access oesophageal, colonic and rectal surgery between 2002 and 2012 were identified from the Hospital Episode Statistics database. Proficiency-gain curves were created using risk-adjusted cumulative sum analysis. Change points were identified, and bootstrapping was performed with 1000 iterations to identify a confidence level. The primary outcome was 30-day mortality; secondary outcomes were 90-day mortality, reintervention, conversion and length of hospital stay. Some 1696, 15 008 and 16 701 minimal access oesophageal, rectal and colonic cancer resections were performed during the study period. The change point in the proficiency-gain curve for 30-day mortality for oesophageal, rectal and colonic surgery was 19 (confidence level 98·4 per cent), 20 (99·2 per cent) and three (99·5 per cent) procedures; the mortality rate fell from 4·0 to 2·0 per cent (relative risk reduction (RRR) 0·50, P = 0·033), from 2·1 to 1·2 per cent (RRR 0·43, P curve for reintervention in oesophageal, rectal and colonic resection was 19 (98·1 per cent), 32 (99·5 per cent) and 26 (99·2 per cent) procedures respectively. There were also significant proficiency-gain curves for 90-day mortality, conversion and length of stay. The introduction of minimal access gastrointestinal cancer surgery has been associated with a proficiency-gain curve for mortality and major morbidity at a national level. Unnecessary patient harm should be avoided by appropriate training and monitoring of new surgical techniques. © 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.

  20. The introduction of syphilis point of care tests in resource limited settings.

    Science.gov (United States)

    Marks, Michael; Mabey, David Cw

    2017-04-01

    Syphilis remains an important and preventable cause of stillbirth and neonatal mortality. About 1 million women with active syphilis become pregnant each year. Without treatment, 25% of them will deliver a stillborn baby and 33% a low birth weight baby with an increased chance of dying in the first month of life. Adverse pregnancy outcomes due to syphilis can be prevented by screening pregnant women, and treating those who test positive with a single dose of penicillin before 28 weeks' gestation. Areas covered: This manuscript covers the impact of syphilis on pregnancy outcome, the diagnosis of syphilis, with a special focus on point of care (POC) tests, and challenges to the introduction of POC tests, and their potential impact on the control and prevention of syphilis in resource limited settings. Expert commentary: POC tests for syphilis are available which meet the ASSURED criteria, and could make syphilis screening accessible to all women anywhere in the world who attend an antenatal clinic. High quality dual POC tests for HIV and syphilis could ensure that well-funded programmes for the prevention of mother to child transmission of HIV can contribute towards increased coverage of antenatal syphilis screening, and prevent more than 300,000 adverse pregnancy outcomes due to syphilis annually. Alongside investment to increase availability of syphilis POC tests, operational research is needed to understand how best to improve screening of pregnant women and to translate test availability into improved pregnancy outcomes.

  1. Lambda based control O₂ set point optimisation and evaluation; Lambdabaserad reglering. Boervaerdesoptimering av O₂ och utvaerdering

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Mikael; Brodin, Peter [Vattenfall Utveckling, Aelvkarleby (Sweden)

    2004-10-01

    During winter and spring 2003, the project 'Lambda based control' was carried out at Vattenfall Utveckling AB in Aelvkarleby, Sweden. The main purpose of the project was to explore whether conventional lambda sensors could be used to control the fuel/air ratio in small boilers. The conclusion was that this is possible. To make use of this result, the question of what the numerical set value for O₂ should be has to be answered. Several parameters affect the oxygen level in the combustion gas. The main purpose of this project is to explore whether there is a cost-efficient way of controlling the fuel/air ratio by using lambda sensors. The scope of the project is to achieve the following, building on the experience from project P4-209: find out which parameters correlate most strongly with lambda; develop a method to decide which and how many parameters to use in order to optimize cost efficiency; calculate the optimal set value for O₂ in one of the boilers used for experiments in the project; and evaluate the method and compare important operating parameters, such as efficiency and emissions. The method developed in the project uses initial measurements to find the relation between O₂ and emissions at different power levels. A set point curve is then calculated in which the set point for O₂ is expressed as a function of power level in the current boiler. The method has been implemented and evaluated on a 400 kW boiler in Aelvkarleby, Sweden. The results are improvements in efficiency (6%) and emissions: CO decreased by 40% and NO decreased by 20%. The conclusion is that lambda based control according to this method could be a profitable investment under the right circumstances, where stability in boiler characteristics is the most important property. What makes the method uncertain is its inability to handle changes in the characteristics of a boiler.

  2. Visibility of noisy point cloud data

    KAUST Repository

    Mehra, Ravish

    2010-06-01

    We present a robust algorithm for estimating visibility from a given viewpoint for a point set containing concavities, non-uniformly spaced samples, and possibly corrupted with noise. Instead of performing an explicit surface reconstruction for the point set, visibility is computed based on a construction involving a convex hull in a dual space, an idea inspired by the work of Katz et al. [26]. We derive theoretical bounds on the behavior of the method in the presence of noise and concavities, and use the derivations to develop a robust visibility estimation algorithm. In addition, computing visibility from a set of adaptively placed viewpoints allows us to generate locally consistent partial reconstructions. Using a graph based approximation algorithm we couple such reconstructions to extract globally consistent reconstructions. We test our method on a variety of 2D and 3D point sets of varying complexity and noise content. © 2010 Elsevier Ltd. All rights reserved.
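    The hidden-point-removal construction of Katz et al., on which the record builds, can be sketched in 2D: spherically flip the points about a large circle centred at the viewpoint, then keep the points whose flipped images land on the convex hull of the flipped set plus the viewpoint. The sketch below is illustrative only (the `radius_scale` parameter is an assumption, and the record's robust treatment of noise is not reproduced):

```python
import math

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def visible_points(points, viewpoint, radius_scale=100.0):
    """Hidden-point-removal sketch: a point is classified visible if its
    spherically flipped image lies on the convex hull of the flipped set
    plus the viewpoint. radius_scale is a tuning assumption."""
    shifted = [(x - viewpoint[0], y - viewpoint[1]) for x, y in points]
    R = radius_scale * max(math.hypot(x, y) for x, y in shifted)
    flipped = {}
    for (x, y), orig in zip(shifted, points):
        n = math.hypot(x, y)  # assumes no point coincides with the viewpoint
        flipped[(x + 2 * (R - n) * x / n, y + 2 * (R - n) * y / n)] = orig
    hull = convex_hull(list(flipped) + [(0.0, 0.0)])
    return [flipped[v] for v in hull if v != (0.0, 0.0)]

# A point directly behind another (w.r.t. the viewpoint) is removed:
vis = visible_points([(1, 0), (-1, 0), (0, 1), (0, -1)], (3.0, 0.0))
print(vis)  # contains (1, 0) but not the occluded (-1, 0)
```

Note that the classification is sensitive to the flipping radius, which is one of the issues the record's noise analysis addresses.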

  3. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

    The present study was carried out in order to track the maximum power point in a variable speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the tracking error integral of maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second order integral operation of the original sliding mode control input signal. The result of the second order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the power generator output. The simulation results, calculated using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).
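    The first step above, setting the rotor speed to an optimal point for each wind speed, follows the standard tip-speed-ratio rule ω_ref = λ_opt·v/R, with the extractable power P = ½ρπR²C_p,max·v³. A minimal sketch with illustrative turbine parameters (assumed values, not taken from the record):

```python
import math

# Illustrative turbine parameters (assumptions, not from the record)
RHO = 1.225        # air density [kg/m^3]
R = 2.0            # rotor radius [m]
LAMBDA_OPT = 7.0   # optimal tip speed ratio
CP_MAX = 0.45      # maximum power coefficient

def mppt_setpoint(wind_speed):
    """Reference rotor speed [rad/s] and maximum extractable power [W]
    for a given wind speed, per the tip-speed-ratio MPPT rule."""
    omega_ref = LAMBDA_OPT * wind_speed / R
    p_max = 0.5 * RHO * math.pi * R**2 * CP_MAX * wind_speed**3
    return omega_ref, p_max

for v in (6.0, 9.0, 12.0):
    omega, p = mppt_setpoint(v)
    print(f"v = {v} m/s -> omega_ref = {omega:.1f} rad/s, P_max = {p / 1e3:.2f} kW")
```

The sliding mode controller in the record is the mechanism that drives the rotor toward this reference despite wind fluctuations; the rule above only supplies the set point.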

  4. Close set in volleyball. Differences and discriminatory power of final game actions in formative stages

    Directory of Open Access Journals (Sweden)

    Dávila Romero, Carlos

    2012-01-01

    Full Text Available The aim of this study was to establish which final game actions discriminate between victory and defeat in women's volleyball formative stages during close sets, which by regulation are decided with a minimal advantage of two points. A total of 57 sets were analyzed in the infantile category and 69 sets in the cadet category during the national volleyball championship at school age (12-16, Valladolid 2008 and Huelva 2009). Statistical analysis shows differences between winning and losing teams in the positive attack in the infantile category and in the positive block and errors in the cadet category. Discriminant analysis, a statistical test that determines the game actions most significant in establishing the differences between winning and losing teams, shows that victory and defeat in close sets can be predicted from the positive attack, negative service and errors in the infantile category, and from the positive block and errors in the cadet category. These results suggest that during decisive set moments in both categories, either control of technical gestures or errors arising from a regulation infraction may predict final performance.

  5. The power of PowerPoint.

    Science.gov (United States)

    Niamtu, J

    2001-08-01

    Carousel slide presentations have been used for academic and clinical presentations since the late 1950s. However, advances in computer technology have caused a paradigm shift, and digital presentations are quickly becoming standard for clinical presentations. The advantages of digital presentations include cost savings; portability; easy updating capability; Internet access; multimedia functions, such as animation, pictures, video, and sound; and customization to augment audience interest and attention. Microsoft PowerPoint has emerged as the most popular digital presentation software and is currently used by many practitioners with and without significant computer expertise. The user-friendly platform of PowerPoint enables even the novice presenter to incorporate digital presentations into his or her profession. PowerPoint offers many advanced options that, with a minimal investment of time, can be used to create more interactive and professional presentations for lectures, patient education, and marketing. Examples of advanced PowerPoint applications are presented in a stepwise manner to unveil the full power of PowerPoint. By incorporating these techniques, medical practitioners can easily personalize, customize, and enhance their PowerPoint presentations. Complications, pitfalls, and caveats are discussed to detour and prevent misadventures in digital presentations. Relevant Web sites are listed to further update, customize, and communicate PowerPoint techniques.

  6. Non-Asymptotic Confidence Sets for Circular Means

    Directory of Open Access Journals (Sweden)

    Thomas Hotz

    2016-10-01

    Full Text Available The mean of data on the unit circle is defined as the minimizer of the average squared Euclidean distance to the data. Based on Hoeffding’s mass concentration inequalities, non-asymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set.
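
The circular mean defined above has a closed form: it is the direction of the resultant of the data regarded as unit vectors, whenever that resultant is nonzero. A minimal sketch (the function name is illustrative):

```python
import math

def circular_mean(angles):
    # Direction of the resultant vector: this is the minimizer of the
    # average squared Euclidean distance from a point on the unit circle
    # to the data points, provided the resultant is nonzero.
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    if s == 0.0 and c == 0.0:
        raise ValueError("zero resultant: circular mean is not unique")
    return math.atan2(s, c)

# Angles straddling the wrap-around point average correctly,
# unlike the naive arithmetic mean of the raw angle values.
print(circular_mean([-0.1, 0.0, 0.1, 0.2]))  # close to 0.05
```

A Hoeffding-type confidence set as in the paper would then be an arc around this estimate; the concentration bound only adds a data-independent radius.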

  7. Assessing and minimizing contamination in time of flight based validation data

    Science.gov (United States)

    Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald

    2017-10-01

    Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.

  8. Three Point Functions in Higher Spin AdS3 Holography with 1/N Corrections

    Directory of Open Access Journals (Sweden)

    Yasuaki Hikida

    2017-10-01

    Full Text Available We examine three point functions with two scalar operators and a higher spin current in the 2d W_N minimal model to the next non-trivial order in the 1/N expansion. The minimal model was proposed to be dual to a 3d higher spin gauge theory, and 1/N corrections should be interpreted as quantum effects in the dual gravity theory. We develop a simple and systematic method to obtain three point functions by decomposing four point functions of scalar operators with Virasoro conformal blocks. Applying the method, we reproduce known results at the leading order in 1/N and obtain new ones at the next leading order. As confirmation, we check that our results satisfy relations among three point functions conjectured before.

  9. Non-minimal Higgs inflation and frame dependence in cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Steinwachs, Christian F. [School of Mathematical Sciences, University of Nottingham University Park, Nottingham, NG7 2RD (United Kingdom); Kamenshchik, Alexander Yu. [Dipartimento di Fisica e Astronomia and INFN, Via Irnerio 46, 40126 Bologna, Italy and L.D. Landau Institute for Theoretical Physics of the Russian Academy of Sciences, Kosygin str. 2, 119334 Moscow (Russian Federation)

    2013-02-21

    We investigate a very general class of cosmological models with scalar fields non-minimally coupled to gravity. A particular representative in this class is given by the non-minimal Higgs inflation model, in which the Standard Model Higgs boson and the inflaton are described by one and the same scalar particle. While the predictions of the non-minimal Higgs inflation scenario come numerically remarkably close to the recently discovered mass of the Higgs boson, there remains a conceptual problem in this model associated with the choice of the cosmological frame. While the classical theory is independent of this choice, we find by an explicit calculation that already the first quantum corrections induce a frame dependence. We give a geometrical explanation of this frame dependence by embedding it into a more general field theoretical context. From this analysis, some conceptual points in the long-standing cosmological debate 'Jordan frame vs. Einstein frame' become more transparent and can in principle be resolved in a natural way.

  10. Sub-quadratic decoding of one-point Hermitian codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde; Beelen, Peter

    2015-01-01

    We present the first two sub-quadratic complexity decoding algorithms for one-point Hermitian codes. The first is based on a fast realization of the Guruswami-Sudan algorithm using state-of-the-art algorithms from computer algebra for polynomial-ring matrix minimization. The second is a power...... decoding algorithm: an extension of classical key equation decoding which gives a probabilistic decoding algorithm up to the Sudan radius. We show how the resulting key equations can be solved by the matrix minimization algorithms from computer algebra, yielding similar asymptotic complexities....

  11. Statistical quality control a loss minimization approach

    CERN Document Server

    Trietsch, Dan

    1999-01-01

    While many books on quality espouse the Taguchi loss function, they do not examine its impact on statistical quality control (SQC). But using the Taguchi loss function sheds new light on questions relating to SQC and calls for some changes. This book covers SQC in a way that conforms with the need to minimize loss. Subjects often not covered elsewhere include: (i) measurements, (ii) determining how many points to sample to obtain reliable control charts (for which purpose a new graphic tool, diffidence charts, is introduced), (iii) the connection between process capability and tolerances, (iv)
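
The Taguchi loss function that the book builds SQC on charges a cost proportional to the squared deviation from target, so the expected loss decomposes into variance plus squared bias. A small sketch with invented numbers (target, cost coefficient `k`, and sample values are illustrative):

```python
def taguchi_loss(y, target, k):
    # Quadratic (Taguchi) loss: L(y) = k * (y - target)^2.
    # Unlike a step loss that is zero inside spec limits, the cost
    # grows continuously with any deviation from the target value.
    return k * (y - target) ** 2

def expected_loss(values, target, k):
    # Sample-average loss; equals k * (variance + bias^2) with the
    # population (1/n) variance convention.
    n = len(values)
    return sum(taguchi_loss(y, target, k) for y in values) / n

# Illustrative numbers: target 10.0 mm, cost coefficient 2.0 $/mm^2.
sample = [9.8, 10.1, 10.0, 10.3]
print(expected_loss(sample, 10.0, 2.0))  # approximately 0.07
```

This decomposition is why centering the process on target (reducing bias) and tightening it (reducing variance) both lower the expected loss, rather than only conformance to limits.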

  12. Minimizing communication cost among distributed controllers in software defined networks

    Science.gov (United States)

    Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

    Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by promising a programmable network. The fundamental idea behind this new architecture is to simplify network complexity by decoupling the control plane and data plane of network devices, and by centralizing the control plane. Recently, controllers have been distributed to solve the problem of a single point of failure and to increase scalability and flexibility during workload distribution. Yet even though distributed controllers are flexible and scalable enough to accommodate a larger number of network switches, the cost of intercommunication between them remains a challenging issue in the Software Defined Network environment. This paper aims to fill that gap by proposing a new mechanism that minimizes intercommunication cost via graph partitioning, an NP-hard problem. The proposed methodology swaps network elements between controller domains, using a computed communication gain to decide each move; the swapping minimizes both inter- and intra-domain communication cost. We validate our work with the OMNeT++ simulation environment. Simulation results show that the proposed mechanism reduces the inter-domain communication cost among controllers compared to traditional distributed controllers.
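
The swap-with-gain step the abstract describes can be sketched as a Kernighan-Lin style move evaluation; the topology, traffic weights, and function names below are hypothetical, not taken from the paper:

```python
def move_gain(node, src, dst, adj, domain):
    # Gain in inter-domain (controller-to-controller) traffic if `node`
    # moves from domain `src` to domain `dst`. adj[node] maps neighbors
    # to traffic weights; domain[n] gives each node's current domain.
    # Positive gain means fewer inter-domain messages after the move.
    external = sum(w for nbr, w in adj[node].items() if domain[nbr] == dst)
    internal = sum(w for nbr, w in adj[node].items() if domain[nbr] == src)
    return external - internal

# Tiny illustrative topology: weights are traffic flows between switches.
adj = {
    "s1": {"s2": 3, "s3": 1},
    "s2": {"s1": 3, "s3": 2},
    "s3": {"s1": 1, "s2": 2},
}
domain = {"s1": "A", "s2": "B", "s3": "B"}

# Moving s1 into domain B converts 3+1 inter-domain traffic units into
# intra-domain traffic and externalizes nothing.
print(move_gain("s1", "A", "B", adj, domain))  # 4
```

A partitioner would repeatedly apply the highest-gain move (subject to load-balance constraints) until no positive-gain move remains.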

  13. SU-F-T-78: Minimum Data Set of Measurements for TG 71 Based Electron Monitor-Unit Calculations

    International Nuclear Information System (INIS)

    Xu, H; Guerrero, M; Prado, K; Yi, B

    2016-01-01

    Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves massive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm, up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs that were then converted to air-gap factors for SSD 99–110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements per electron energy; more PDDs and fewer point measurements are generally needed as energy increases. Using only <50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those of the complete data set: the PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: The measurements for TG-71 electron MU calculations can be minimized based on knowledge of how each dosimetric quantity depends on the various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
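
The linear or polynomial approximation of the excluded "missing" data can be sketched with an ordinary least-squares polynomial fit; the cutout sizes and factor values below are invented for illustration and are not the paper's measurements:

```python
import numpy as np

# Hypothetical cutout-factor data: measured factors at a few square
# cutout sizes (cm) for one electron energy (values are illustrative).
measured_size = np.array([3.0, 6.0, 10.0, 15.0, 20.0])
measured_factor = np.array([0.940, 0.985, 1.000, 1.005, 1.007])

# Fit a low-order polynomial to the measured subset ...
coeffs = np.polyfit(measured_size, measured_factor, deg=2)

# ... then interpolate the sizes excluded from the minimum data set.
missing_size = np.array([4.0, 8.0, 12.0])
estimated = np.polyval(coeffs, missing_size)
for s, f in zip(missing_size, estimated):
    print(f"cutout {s:4.1f} cm -> estimated factor {f:.3f}")
```

In practice the fit order and the retained measurement points would be chosen per quantity (applicator factor, cutout factor, air-gap factor) based on how smoothly it varies with field size and SSD.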

  14. SU-F-T-78: Minimum Data Set of Measurements for TG 71 Based Electron Monitor-Unit Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H; Guerrero, M; Prado, K; Yi, B [University of Maryland School of Medicine, Baltimore, MD (United States)

    2016-06-15

    Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves massive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm, up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs that were then converted to air-gap factors for SSD 99–110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements per electron energy; more PDDs and fewer point measurements are generally needed as energy increases. Using only <50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those of the complete data set: the PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: The measurements for TG-71 electron MU calculations can be minimized based on knowledge of how each dosimetric quantity depends on the various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.

  15. On Motivating Operations at the Point of Online Purchase Setting

    Science.gov (United States)

    Fagerstrom, Asle; Arntzen, Erik

    2013-01-01

    Consumer behavior analysis can be applied over a wide range of economic topics in which the main focus is the contingencies that influence the behavior of the economic agent. This paper provides an overview on the work that has been done on the impact from motivating operations at the point of online purchase situation. Motivating operations, a…

  16. New Technique for Improving Performance of LDPC Codes in the Presence of Trapping Sets

    Directory of Open Access Journals (Sweden)

    Mohamed Adnan Landolsi

    2008-06-01

    Full Text Available Trapping sets are considered the primary factor for degrading the performance of low-density parity-check (LDPC codes in the error-floor region. The effect of trapping sets on the performance of an LDPC code becomes worse as the code size decreases. One approach to tackle this problem is to minimize trapping sets during LDPC code design. However, while trapping sets can be reduced, their complete elimination is infeasible due to the presence of cycles in the underlying LDPC code bipartite graph. In this work, we introduce a new technique based on trapping sets neutralization to minimize the negative effect of trapping sets under belief propagation (BP decoding. Simulation results for random, progressive edge growth (PEG and MacKay LDPC codes demonstrate the effectiveness of the proposed technique. The hardware cost of the proposed technique is also shown to be minimal.

  17. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    Science.gov (United States)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solution to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
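
The MIS construction studied above can be sketched with the standard greedy algorithm; since any maximal independent set is automatically a dominating set, a direct check confirms the property. The graph below is illustrative:

```python
def greedy_mis(adj):
    # Greedy maximal independent set: repeatedly pick a remaining node,
    # add it, and remove it together with all its neighbors. The result
    # is maximal, hence also dominating: every excluded node was removed
    # because it neighbors a chosen node.
    remaining = set(adj)
    mis = set()
    while remaining:
        v = min(remaining)  # deterministic pick; degree-aware picks shrink the set
        mis.add(v)
        remaining -= {v} | set(adj[v])
    return mis

def dominates(mis, adj):
    # True if every node is in `mis` or adjacent to a member of it.
    return all(v in mis or any(u in mis for u in adj[v]) for v in adj)

# Small illustrative graph: a star (center 1) plus a pendant path 4-5.
adj = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1, 5}, 5: {4}}
s = greedy_mis(adj)
print(s, dominates(s, adj))  # {1, 5} True
```

On scale-free networks, hub-first (highest-degree-first) picks tend to give much smaller sets than the arbitrary pick used here.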

  18. Pro SharePoint 2013 administration

    CERN Document Server

    Garrett, Robert

    2013-01-01

    Pro SharePoint 2013 Administration is a practical guide to SharePoint 2013 for intermediate to advanced SharePoint administrators and power users, covering the out-of-the-box feature set and capabilities of Microsoft's collaboration and business productivity platform. SharePoint 2013 is an incredibly complex product, with many moving parts, new features, best practices, and 'gotchas.' Author Rob Garrett distills SharePoint's portfolio of features, capabilities, and utilities into an in-depth professional guide-with no fluff and copious advice-that is designed from scratch to be the manual Micr

  19. Set optimization and applications the state of the art : from set relations to set-valued risk measures

    CERN Document Server

    Heyde, Frank; Löhne, Andreas; Rudloff, Birgit; Schrage, Carola

    2015-01-01

    This volume presents five surveys with extensive bibliographies and six original contributions on set optimization and its applications in mathematical finance and game theory. The topics range from more conventional approaches that look for minimal/maximal elements with respect to vector orders or set relations, to the new complete-lattice approach that comprises a coherent solution concept for set optimization problems, along with existence results, duality theorems, optimality conditions, variational inequalities and theoretical foundations for algorithms. Modern approaches to scalarization methods can be found as well as a fundamental contribution to conditional analysis. The theory is tailor-made for financial applications, in particular risk evaluation and [super-]hedging for market models with transaction costs, but it also provides a refreshing new perspective on vector optimization. There is no comparable volume on the market, making the book an invaluable resource for researchers working in vector o...

  20. Activity recognition from minimal distinguishing subsequence mining

    Science.gov (United States)

    Iqbal, Mohammad; Pao, Hsing-Kuo

    2017-08-01

    Human activity recognition is one of the most important research topics in the era of Internet of Things. To separate different activities given sensory data, we utilize a Minimal Distinguishing Subsequence (MDS) mining approach to efficiently find distinguishing patterns among different activities. We first transform the sensory data into a series of sensor triggering events and operate the MDS mining procedure afterwards. The gap constraints are also considered in the MDS mining. Given the multi-class nature of most activity recognition tasks, we modify the MDS mining approach from a binary case to a multi-class one to fit the need for multiple activity recognition. We also study how to select the best parameter set including the minimal and the maximal support thresholds in finding the MDSs for effective activity recognition. Overall, the prediction accuracy is 86.59% on the van Kasteren dataset which consists of four different activities for recognition.
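
The core test in MDS mining, whether a pattern occurs as a gap-constrained subsequence of an event series and separates one activity class from another, can be sketched as follows (the sensor event labels and the exact gap semantics are illustrative assumptions, not the paper's definitions):

```python
def occurs_with_gap(pattern, sequence, max_gap):
    # True if `pattern` occurs in `sequence` as a subsequence with at
    # most `max_gap` events skipped between consecutive matched items.
    def match(p_idx, s_idx):
        if p_idx == len(pattern):
            return True
        # First item may start anywhere; later items obey the gap bound.
        limit = len(sequence) if p_idx == 0 else min(len(sequence), s_idx + max_gap + 1)
        for j in range(s_idx, limit):
            if sequence[j] == pattern[p_idx] and match(p_idx + 1, j + 1):
                return True
        return False
    return match(0, 0)

def distinguishes(pattern, pos_seqs, neg_seqs, max_gap):
    # A pattern distinguishes an activity class if it appears in every
    # positive sequence and in none of the negative ones.
    return (all(occurs_with_gap(pattern, s, max_gap) for s in pos_seqs)
            and not any(occurs_with_gap(pattern, s, max_gap) for s in neg_seqs))

# Hypothetical sensor-trigger events: kettle (K), fridge (F), door (D), tap (T).
cooking = [["K", "T", "F", "K"], ["F", "K", "T", "K"]]
leaving = [["D", "T", "D"], ["T", "D"]]
print(distinguishes(["K", "T"], cooking, leaving, max_gap=1))  # True
```

A miner would enumerate candidate patterns (pruned by the minimal and maximal support thresholds) and keep only the minimal ones, i.e. those none of whose proper subsequences already distinguish the classes.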

  1. Evaluation of a School-Based Teen Obesity Prevention Minimal Intervention

    Science.gov (United States)

    Abood, Doris A.; Black, David R.; Coster, Daniel C.

    2008-01-01

    Objective: A school-based nutrition education minimal intervention (MI) was evaluated. Design: The design was experimental, with random assignment at the school level. Setting: Seven schools were randomly assigned as experimental, and 7 as delayed-treatment. Participants: The experimental group included 551 teens, and the delayed treatment group…

  2. Minimization of waste volumes by means of pin-pointed decontamination during decommissioning measures. Final report

    International Nuclear Information System (INIS)

    Henschel, K.; Jacobs, W.; Kanitz, L.; Schildbach, T.

    1992-06-01

    This newly developed, semi-automated equipment can remove contamination from building surfaces as well as take radioactive measurements. Its goal is to improve the identification of contaminated areas and the corresponding decontamination of epoxy-layer building construction material using commercially available components, thereby minimizing the waste volume. A system design for the decommissioning of building surfaces was developed; selected components were tested and their function certified. With this system concept, the decontamination of fixed epoxy layers at heights of up to 20 m is possible. Operational data for the system are available. (orig.) [de

  3. Hybrid fixed point in CAT(0) spaces

    Directory of Open Access Journals (Sweden)

    Hemant Kumar Pathak

    2018-02-01

    Full Text Available In this paper, we introduce an ultrapower approach to prove fixed point theorems for $H^{+}$-nonexpansive multi-valued mappings in the setting of CAT(0) spaces and prove several hybrid fixed point results in CAT(0) spaces for families of single-valued nonexpansive or quasinonexpansive mappings and multi-valued upper semicontinuous, almost lower semicontinuous or $H^{+}$-nonexpansive mappings which are weakly commuting. We also establish a result about the structure of the set of fixed points of an $H^{+}$-quasinonexpansive mapping on a CAT(0) space.

  4. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory...... that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained...... orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem....

  5. Kundt solutions of minimal massive 3D gravity

    Science.gov (United States)

    Deger, Nihat Sadik; Sarıoğlu, Özgür

    2015-11-01

    We construct Kundt solutions of minimal massive gravity theory and show that, similar to topologically massive gravity (TMG), most of them are constant scalar invariant (CSI) spacetimes that correspond to deformations of round and warped (A)dS. We also find an explicit non-CSI Kundt solution at the merger point. Finally, we give their algebraic classification with respect to the traceless Ricci tensor (Segre classification) and show that their Segre types match with the types of their counterparts in TMG.

  6. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yunlong; Wang, Aiping; Guo, Lei; Wang, Hong

    2017-07-09

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
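
The Parzen-window step can be sketched via Renyi's quadratic entropy, whose "information potential" is a double sum of Gaussian kernels over error pairs; minimizing the entropy (equivalently, maximizing the potential) concentrates the tracking errors. The kernel width and error samples below are illustrative:

```python
import math

def information_potential(errors, sigma):
    # Parzen estimate of the quadratic information potential:
    # V = (1/n^2) * sum_ij G(e_i - e_j), where G is a Gaussian kernel
    # of variance 2*sigma^2 (the convolution of two width-sigma kernels).
    n = len(errors)
    two_var = 2.0 * sigma * sigma
    norm = 1.0 / math.sqrt(2.0 * math.pi * two_var)
    total = 0.0
    for ei in errors:
        for ej in errors:
            total += norm * math.exp(-(ei - ej) ** 2 / (2.0 * two_var))
    return total / (n * n)

def quadratic_entropy(errors, sigma):
    # Renyi's quadratic entropy H2 = -log V; minimizing H2 maximizes V.
    return -math.log(information_potential(errors, sigma))

# Concentrated errors have lower entropy than spread-out ones.
tight = [0.0, 0.01, -0.01, 0.02]
loose = [0.0, 1.0, -1.0, 2.0]
print(quadratic_entropy(tight, 0.1) < quadratic_entropy(loose, 0.1))  # True
```

A controller of this family would compute the gradient of such a sample-based potential with respect to the control input at each step.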

  7. Some notes on tetrahedrally closed spherical sets in Euclidean spaces

    Indian Academy of Sciences (India)


    is a relation between these sets. P is called the point set, L the line set and I the incidence relation. A point-line geometry S = (P,L,I) is called a near polygon if every two distinct points are incident with at most one line and if for every point x and every line L, there exists a unique point on L that is nearest to x with respect to ...

  8. Columbia River ESI: SOCECON (Socioeconomic Resource Points and Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector points and lines representing human-use resource data for Columbia River. In the data set, vector points represent aquaculture sites,...

  9. Smartphone-assisted minimally invasive neurosurgery.

    Science.gov (United States)

    Mandel, Mauricio; Petito, Carlo Emanuel; Tutihashi, Rafael; Paiva, Wellingson; Abramovicz Mandel, Suzana; Gomes Pinto, Fernando Campos; Ferreira de Andrade, Almir; Teixeira, Manoel Jacobsen; Figueiredo, Eberval Gadelha

    2018-03-13

    OBJECTIVE Advances in video and fiber optics since the 1990s have led to the development of several commercially available high-definition neuroendoscopes. This technological improvement, however, has been surpassed by the smartphone revolution. With the increasing integration of smartphone technology into medical care, the introduction of these high-quality computerized communication devices with built-in digital cameras offers new possibilities in neuroendoscopy. The aim of this study was to investigate the usefulness of smartphone-endoscope integration in performing different types of minimally invasive neurosurgery. METHODS The authors present a new surgical tool that integrates a smartphone with an endoscope by use of a specially designed adapter, thus eliminating the need for the video system customarily used for endoscopy. The authors used this novel combined system to perform minimally invasive surgery on patients with various neuropathological disorders, including cavernomas, cerebral aneurysms, hydrocephalus, subdural hematomas, contusional hematomas, and spontaneous intracerebral hematomas. RESULTS The new endoscopic system featuring smartphone-endoscope integration was used by the authors in the minimally invasive surgical treatment of 42 patients. All procedures were successfully performed, and no complications related to the use of the new method were observed. The quality of the images obtained with the smartphone was high enough to provide adequate information to the neurosurgeons, as smartphone cameras can record images in high definition or 4K resolution. Moreover, because the smartphone screen moves along with the endoscope, surgical mobility was enhanced with the use of this method, facilitating more intuitive use. In fact, this increased mobility was identified as the greatest benefit of the use of the smartphone-endoscope system compared with the use of the neuroendoscope with the standard video set. CONCLUSIONS Minimally invasive approaches

  10. Experience with the EPA manual for waste minimization opportunity assessments

    International Nuclear Information System (INIS)

    Bridges, J.S.

    1990-01-01

    The EPA Waste Minimization Opportunity Assessment Manual (EPA/625/788/003) was published to assist those responsible for managing waste minimization activities at the waste-generating facility and at corporate levels. The Manual sets forth a procedure that incorporates technical and managerial principles and motivates people to develop and implement pollution prevention concepts and ideas. Environmental management has increasingly become a cooperative endeavor: whether in government, industry, or other forms of enterprise, the effectiveness with which people work together toward the attainment of a clean environment is largely determined by the ability of those who hold managerial positions. This paper offers a description of the EPA Waste Minimization Opportunity Assessment Manual procedure, which supports the waste minimization assessment as a systematic, planned procedure with the objective of identifying ways to reduce or eliminate waste generation. The Manual is a management tool that blends science and management principles. The practice of managing waste minimization/pollution prevention makes use of the underlying organized science and engineering knowledge and applies it in the light of realities to gain a desired, practical result. The early stages of EPA's Pollution Prevention Research Program centered on the development of the Manual and its use at a number of facilities within the private and public sectors. This paper identifies a number of case studies and waste minimization opportunity assessment reports that demonstrate the value of using the Manual's approach. Several industry-specific waste minimization assessment manuals have resulted from the Manual's generic approach to waste minimization. The generic approach has required some modification when the waste stream is other than industrial hazardous waste.

  11. Point-of-care blood eosinophil count in a severe asthma clinic setting.

    Science.gov (United States)

    Heffler, Enrico; Terranova, Giovanni; Chessari, Carlo; Frazzetto, Valentina; Crimi, Claudia; Fichera, Silvia; Picardi, Giuseppe; Nicolosi, Giuliana; Porto, Morena; Intravaia, Rossella; Crimi, Nunzio

    2017-07-01

    One of the main severe asthma phenotypes is severe eosinophilic or eosinophilic refractory asthma, for which novel biologic agents are emerging as therapeutic options. In this context, blood eosinophil counts are one of the most reliable biomarkers. To evaluate the performance of a point-of-care peripheral blood counter in patients with severe asthma, the blood eosinophil counts of 76 patients with severe asthma were evaluated by point-of-care and standard analyzers. A significant correlation between blood eosinophils assessed by the 2 devices was found (R² = 0.854, P asthma and the ELEN index, a composite score useful to predict sputum eosinophilia. The results of our study contribute to the validation of a point-of-care device to assess blood eosinophils and open the possibility of using this device for the management of severe asthma. Copyright © 2017 American College of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  12. Complete on-shell renormalization scheme for the minimal supersymmetric Higgs sector

    International Nuclear Information System (INIS)

    Chankowski, P.H.; Pokorski, Stefan; Rosiek, Janusz

    1994-01-01

    A systematic on-shell renormalization programme is carried out for the Higgs and gauge boson sectors of the Minimal Supersymmetric Standard Model. Complete one-loop results for the 2- and 3-point Green's functions are explicitly given. The Higgs boson masses and the production cross sections in e+e- colliders are calculated. ((orig.))

  13. Quantitative structure-property relationships for prediction of boiling point, vapor pressure, and melting point.

    Science.gov (United States)

    Dearden, John C

    2003-08-01

    Boiling point, vapor pressure, and melting point are important physicochemical properties in the modeling of the distribution and fate of chemicals in the environment. However, such data often are not available, and therefore must be estimated. Over the years, many attempts have been made to calculate boiling points, vapor pressures, and melting points by using quantitative structure-property relationships, and this review examines and discusses the work published in this area, and concentrates particularly on recent studies. A number of software programs are commercially available for the calculation of boiling point, vapor pressure, and melting point, and these have been tested for their predictive ability with a test set of 100 organic chemicals.

  14. Automatic continuous dew point measurement in combustion gases

    Energy Technology Data Exchange (ETDEWEB)

    Fehler, D.

    1986-08-01

    Low exhaust temperatures serve to minimize energy consumption in combustion systems. This requires accurate, continuous measurement of exhaust condensation. An automatic dew point meter for continuous operation is described. The principle of measurement, the design of the measuring system, and practical aspects of operation are discussed.

  15. Determining decoupling points in a supply chain networks using NSGA II algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ebrahimiarjestan, M.; Wang, G.

    2017-07-01

    Purpose: Building on the concepts of Lee and Amaral (2002) and Tang and Zhou (2009), we offer a multi-criteria decision-making model that identifies decoupling points so as to minimize production costs, minimize product delivery time to the customer, and maximize customer satisfaction. Design/methodology/approach: This yields a triple-objective model; a meta-heuristic method (NSGA II) is used to solve it and to identify the Pareto optimal points. The max (min) method was used. Findings: Our results demonstrate the good performance of NSGA II in extracting Pareto solutions for the proposed model, which determines decoupling points in a supply network. Originality/value: Several approaches to modeling this problem have been proposed so far, each covering only part of the concept. The model defined here treats the concept more generally: we face a multi-criteria decision problem that includes minimizing production costs and product delivery time to customers as well as maximizing customer satisfaction.
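
The Pareto-front extraction at the heart of NSGA II can be sketched as nondominated filtering over the objectives (reduced here to two minimized objectives for brevity; the candidate values are invented):

```python
def pareto_dominates(a, b):
    # a dominates b when a is no worse in every (minimized) objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_pareto_front(points):
    # The nondominated subset: the first front NSGA II peels off before
    # ranking the remaining points into successive fronts.
    return [p for p in points if not any(pareto_dominates(q, p) for q in points if q != p)]

# Hypothetical (cost, delivery_time) pairs for candidate decoupling
# points; both objectives minimized, satisfaction omitted for brevity.
candidates = [(10, 5), (8, 7), (12, 4), (9, 6), (11, 6)]
print(first_pareto_front(candidates))  # (11, 6) is dominated by (9, 6)
```

Full NSGA II adds crowding-distance sorting within each front and genetic variation between generations; the dominance test above is the building block for both.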

  16. Determining decoupling points in a supply chain networks using NSGA II algorithm

    International Nuclear Information System (INIS)

    Ebrahimiarjestan, M.; Wang, G.

    2017-01-01

    Purpose: Drawing on the concepts of Lee and Amaral (2002) and Tang and Zhou (2009), we offer a multi-criteria decision-making model that identifies decoupling points so as to minimize production costs, minimize product delivery time to the customer, and maximize customer satisfaction. Design/methodology/approach: This yields a triple-objective model, which is solved with a meta-heuristic method (NSGA II) to identify the Pareto optimal points; the max (min) method was used. Findings: Our results demonstrate the good performance of NSGA II in extracting Pareto solutions for the proposed model, which determines decoupling points in a supply network. Originality/value: Several approaches to modeling this problem have been proposed, each covering only part of the concept. The model defined here treats the concept more generally, as a multi-criteria decision problem that includes minimization of production costs and of product delivery time to customers, as well as maximization of customer satisfaction.
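The NSGA II algorithm used in both versions of this record ranks candidate solutions by Pareto dominance before selection. A minimal sketch of the dominance test and extraction of the first non-dominated front, assuming all objectives are minimized (function names are ours, not the authors'):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Non-dominated (Pareto) subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The full algorithm repeats this peeling to rank the whole population into successive fronts and adds a crowding-distance tie-breaker; this sketch shows only the dominance relation at its core.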

  17. Minimally invasive surgical approach to treat posterior urethral diverticulum

    Directory of Open Access Journals (Sweden)

    Ossamah Alsowayan

    2015-01-01

    Full Text Available Urethral diverticulum is a localized saccular or fusiform out-pouching of the urethra. It may occur at any point along the urethra in both males and females. Male urethral diverticulum is rare and can be either congenital or acquired, anterior or posterior. The mainstay of treatment of posterior urethral diverticulum (PUD) is the open surgical approach. Here we discuss our minimally invasive surgical (MIS) approach to managing posterior urethral diverticulum.

  18. Minimally invasive orthognathic surgery.

    Science.gov (United States)

    Resnick, Cory M; Kaban, Leonard B; Troulis, Maria J

    2009-02-01

    Minimally invasive surgery is defined as the discipline in which operative procedures are performed in novel ways to diminish the sequelae of standard surgical dissections. The goals of minimally invasive surgery are to reduce tissue trauma and to minimize bleeding, edema, and injury, thereby improving the rate and quality of healing. In orthognathic surgery, there are two minimally invasive techniques that can be used separately or in combination: (1) endoscopic exposure and (2) distraction osteogenesis. This article describes the historical developments of the fields of orthognathic surgery and minimally invasive surgery, as well as the integration of the two disciplines. Indications, techniques, and the most current outcome data for specific minimally invasive orthognathic surgical procedures are presented.

  19. Probabilistic multiobjective wind-thermal economic emission dispatch based on point estimated method

    International Nuclear Information System (INIS)

    Azizipanah-Abarghooee, Rasoul; Niknam, Taher; Roosta, Alireza; Malekpour, Ahmad Reza; Zare, Mohsen

    2012-01-01

    In this paper, wind power generators are incorporated in the multiobjective economic emission dispatch problem, which simultaneously minimizes wind-thermal electrical energy cost and the emissions produced by fossil-fueled power plants. Large integration of wind energy sources necessitates an efficient model to cope with the uncertainty arising from random wind variation. Hence, a multiobjective stochastic search algorithm based on the 2m point estimated method is implemented to analyze the probabilistic wind-thermal economic emission dispatch problem considering both overestimation and underestimation of available wind power. The 2m point estimated method handles the system uncertainties and renders the probability density function of the desired variables efficiently. Moreover, a new population-based optimization algorithm called the modified teaching-learning algorithm is proposed to determine the set of non-dominated optimal solutions. During the simulation, the set of non-dominated solutions is kept in an external memory (repository). Also, a fuzzy-based clustering technique is implemented to control the size of the repository. In order to select the best compromise solution from the repository, a niching mechanism is utilized such that the population will move toward a smaller search space in the Pareto-optimal front. In order to show the efficiency and feasibility of the proposed framework, three different test systems are presented as case studies. -- Highlights: ► WPGs are being incorporated in the multiobjective economic emission dispatch problem. ► 2m PEM handles the system uncertainties. ► A MTLBO is proposed to determine the set of non-dominated (Pareto) optimal solutions. ► A fuzzy-based clustering technique is implemented to control the size of the repository.
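The 2m point estimated method evaluates the objective at two deterministic points per uncertain input and recombines the results into moments of the output. A minimal sketch for symmetric, zero-skewness inputs, where the standard locations reduce to ±√m with equal weights 1/(2m) (function and variable names are ours, and this is the generic method, not the paper's dispatch-specific implementation):

```python
import math

def pem_2m(f, means, stds):
    """Hong's 2m point estimate method, specialized to zero-skew inputs.

    Evaluates f at 2m deterministic points (two per uncertain input, the
    others held at their means) and returns estimates of the mean and
    standard deviation of the output f(X)."""
    m = len(means)
    xi = math.sqrt(m)          # standard location for zero skewness
    w = 1.0 / (2 * m)          # equal weights, summing to 1
    e1 = e2 = 0.0              # running first and second raw moments
    for k in range(m):
        for sign in (1.0, -1.0):
            x = list(means)
            x[k] = means[k] + sign * xi * stds[k]
            y = f(x)
            e1 += w * y
            e2 += w * y * y
    return e1, math.sqrt(max(e2 - e1 * e1, 0.0))
```

For a linear function of independent inputs the method is exact: with f(x) = x0 + x1 it returns the true mean and the true standard deviation √(σ0² + σ1²).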

  20. Adjustment guidance for cyclotron by real-time display of feasible setting regions

    International Nuclear Information System (INIS)

    Okamura, Tetsuya; Murakami, Tohru

    1990-01-01

    A computer aided operation system for start-up of a cyclotron is being developed in order to support operators who, through experience and intuition, adjust dozens of components to maximize the extracted beam current. This paper describes a guidance method using real-time display of feasible setting regions of adjustment parameters; it is a function of the beam adjustment support system. The key points of this paper are as follows. (1) It is proposed that a cyclotron can be modeled as a series of mappings of the beam condition. In this model, adjustment is considered to be a search for a mapping that maps the beam condition into the acceptance of the cyclotron. (2) The search process is formulated as a nonlinear minimization problem. To solve this problem, a fast search algorithm composed of a line search method (golden section search) and an image processing method (border following) is developed. The solutions are the feasible setting regions. (3) A human interface which displays feasible setting regions and a search history is realized for the beam adjustment support system. It enables operators and computers to cooperate in beam adjustment. (author)
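The line search named in point (2), golden section search, brackets a minimum of a unimodal function and shrinks the bracket by the inverse golden ratio each step. A generic sketch (our implementation, not the authors' code):

```python
def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden section search."""
    invphi = (5 ** 0.5 - 1) / 2            # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while abs(b - a) > tol:
        if fc < fd:                        # minimum lies in [a, d]
            b, d, fd = d, c, fc            # reuse the surviving evaluation
            c = b - invphi * (b - a)
            fc = f(c)
        else:                              # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2
```

Because the two interior points are placed at golden-ratio positions, each iteration reuses one previous function evaluation, so only one new evaluation is needed per bracket reduction.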

  1. Accuracy of linear drilling in temporal bone using drill press system for minimally invasive cochlear implantation.

    Science.gov (United States)

    Dillon, Neal P; Balachandran, Ramya; Labadie, Robert F

    2016-03-01

    A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of [Formula: see text] and higher as well as longer cantilevered drill lengths. The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure.

  2. Southeast Alaska ESI: FISHPT (Fish Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains biological resource data for anadromous fish streams in Southeast Alaska. Vector points in this data set represent locations of fish streams....

  3. Essay Prompts and Topics: Minimizing the Effect of Mean Differences.

    Science.gov (United States)

    Brown, James Dean; And Others

    1991-01-01

    Investigates whether prompts and topic types affect writing performance of college freshmen taking the Manoa Writing Placement Examination (MWPE). Finds that the MWPE is reliable but that responses to prompts and prompt sets differ. Shows that differences arising in performance on prompts or topics can be minimized by examining mean scores and…

  4. Correlation Functions in Holographic Minimal Models

    CERN Document Server

    Papadodimas, Kyriakos

    2012-01-01

    We compute exact three and four point functions in the W_N minimal models that were recently conjectured to be dual to a higher spin theory in AdS_3. The boundary theory has a large number of light operators that are not only invisible in the bulk but grow exponentially with N even at small conformal dimensions. Nevertheless, we provide evidence that this theory can be understood in a 1/N expansion since our correlators look like free-field correlators corrected by a power series in 1/N . However, on examining these corrections we find that the four point function of the two bulk scalar fields is corrected at leading order in 1/N through the contribution of one of the additional light operators in an OPE channel. This suggests that, to correctly reproduce even tree-level correlators on the boundary, the bulk theory needs to be modified by the inclusion of additional fields. As a technical by-product of our analysis, we describe two separate methods -- including a Coulomb gas type free-field formalism -- that ...

  5. Columbia River ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for bird nesting sites in the Columbia River area. Vector points in this data set represent locations of...

  6. Louisiana ESI: SOCECON (Socioeconomic Resource Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for airport, heliport, marina, and boat ramp locations in Louisiana. Vector points in this data set represent the...

  7. 6d dual conformal symmetry and minimal volumes in AdS

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Jyotirmoy; Lipstein, Arthur E. [Centre for Particle Theory & Department of Mathematical Sciences, Durham University,South Road, Durham DH1 3LE (United Kingdom)

    2016-12-20

    The S-matrix of a theory often exhibits symmetries which are not manifest from the viewpoint of its Lagrangian. For instance, powerful constraints on scattering amplitudes are imposed by the dual conformal symmetry of planar 4d N=4 super Yang-Mills theory and the ABJM theory. Motivated by this, we investigate the consequences of dual conformal symmetry in six dimensions, which may provide useful insight into the worldvolume theory of M5-branes (if it enjoys such a symmetry). We find that 6d dual conformal symmetry uniquely fixes the integrand of the one-loop 4-point amplitude, and its structure suggests a Lagrangian with more than two derivatives. On integrating out the loop momentum in 6−2ϵ dimensions, the result is very similar to the corresponding amplitude of N=4 super Yang-Mills theory. We confirm this result holographically by generalizing the Alday-Maldacena solution for a minimal area string in Anti-de Sitter space to a minimal volume M2-brane ending on a pillow-shaped surface in the boundary whose seams correspond to a null-polygon. This involves careful treatment of a prefactor which diverges as 1/ϵ, and we comment on its possible interpretation. We also study 2-loop 4-point integrands with 6d dual conformal symmetry and speculate on the existence of an all-loop formula for the 4-point amplitude.

  8. Minimal Poems Written in 1979 Minimal Poems Written in 1979

    Directory of Open Access Journals (Sweden)

    Sandra Sirangelo Maggio

    2008-04-01

    Full Text Available The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.

  9. Minimal surfaces, stratified multivarifolds, and the plateau problem

    CERN Document Server

    Thi, Dao Trong; Primrose, E J F; Silver, Ben

    1991-01-01

    Plateau's problem is a scientific trend in modern mathematics that unites several different problems connected with the study of minimal surfaces. In its simplest version, Plateau's problem is concerned with finding a surface of least area that spans a given fixed one-dimensional contour in three-dimensional space--perhaps the best-known example of such surfaces is provided by soap films. From the mathematical point of view, such films are described as solutions of a second-order partial differential equation, so their behavior is quite complicated and has still not been thoroughly studied. Soap films, or, more generally, interfaces between physical media in equilibrium, arise in many applied problems in chemistry, physics, and also in nature. In applications, one finds not only two-dimensional but also multidimensional minimal surfaces that span fixed closed "contours" in some multidimensional Riemannian space. An exact mathematical statement of the problem of finding a surface of least area or volume requir...

  10. Minimally extended SILH

    International Nuclear Information System (INIS)

    Chala, Mikael; Grojean, Christophe; Humboldt-Univ. Berlin; Lima, Leonardo de; Univ. Estadual Paulista, Sao Paulo

    2017-03-01

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  11. Minimally extended SILH

    Energy Technology Data Exchange (ETDEWEB)

    Chala, Mikael [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Valencia Univ. (Spain). Dept. de Fisica Teorica y IFIC; Durieux, Gauthier; Matsedonskyi, Oleksii [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Grojean, Christophe [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Humboldt-Univ. Berlin (Germany). Inst. fuer Physik; Lima, Leonardo de [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Univ. Estadual Paulista, Sao Paulo (Brazil). Inst. de Fisica Teorica

    2017-03-15

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  12. Geodetic Control Points - MO 2014 Springfield Benchmarks (SHP)

    Data.gov (United States)

    NSGIC State | GIS Inventory — Points that show set benchmark or survey control locations in the City of Springfield. Many of these points are PLS section corners and quarter corners. These points...

  13. Correlates of minimal dating.

    Science.gov (United States)

    Leck, Kira

    2006-10-01

    Researchers have associated minimal dating with numerous factors. The present author tested shyness, introversion, physical attractiveness, performance evaluation, anxiety, social skill, social self-esteem, and loneliness to determine the nature of their relationships with 2 measures of self-reported minimal dating in a sample of 175 college students. For women, shyness, introversion, physical attractiveness, self-rated anxiety, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. For men, physical attractiveness, observer-rated social skill, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. The patterns of relationships were not identical for the 2 indicators of minimal dating, indicating the possibility that minimal dating is not a single construct as researchers previously believed. The present author discussed implications and suggestions for future researchers.

  14. emMAW: computing minimal absent words in external memory.

    Science.gov (United States)

    Héliou, Alice; Pissis, Solon P; Puglisi, Simon J

    2017-09-01

    The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n on a fixed-sized alphabet based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n, requires more than 20n bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available. This allows for computation of minimal absent words on far bigger data sets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. https://github.com/solonas13/maw (free software under the terms of the GNU GPL). alice.heliou@lix.polytechnique.fr or solon.pissis@kcl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
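For intuition about the object emMAW computes: a minimal absent word a·u·b is absent from the sequence while both of its maximal proper factors a·u and u·b occur in it. A naive quadratic-space sketch that defines the object directly (nothing like the suffix-array or external-memory algorithms of the record, and usable only on short strings):

```python
def minimal_absent_words(s, alphabet=None):
    """Naive minimal absent words of s: report w = a + u + b when w itself
    is absent from s but both maximal proper factors a+u and u+b occur."""
    alphabet = sorted(set(s)) if alphabet is None else sorted(alphabet)
    n = len(s)
    # All factors (substrings) of s, plus the empty word.
    factors = {s[i:j] for i in range(n) for j in range(i + 1, n + 1)}
    factors.add("")
    maws = set()
    for u in factors:
        if len(u) + 2 > n + 1:          # a MAW has length at most n + 1
            continue
        for a in alphabet:
            for b in alphabet:
                w = a + u + b
                if w not in factors and (a + u) in factors and (u + b) in factors:
                    maws.add(w)
    return sorted(maws)
```

For example, "bb" is a minimal absent word of "abaab" because "b" occurs but "bb" does not; the O(n)-time algorithm cited in the record finds the same set via suffix arrays without enumerating all factors.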

  15. Periodic-cylinder vesicle with minimal energy

    International Nuclear Information System (INIS)

    Xiao-Hua, Zhou

    2010-01-01

    We give some details about the periodic cylindrical solution found by Zhang and Ou-Yang in [1996 Phys. Rev. E 53 4206] for the general shape equation of a vesicle. Three different kinds of periodic cylindrical surfaces and a special closed cylindrical surface are obtained. Using elliptic functions, we find that this periodic shape has the minimal total energy per period when the period–amplitude ratio β ≈ 1.477, and point out that the deformation between the plane and this periodic shape is discontinuous. Our results also apply to DNA and multi-walled carbon nanotubes (MWNTs). (cross-disciplinary physics and related areas of science and technology)

  16. Utility of the point of care CD4 analyzer, PIMA, to enumerate CD4 counts in the field settings in India

    Directory of Open Access Journals (Sweden)

    Thakar Madhuri

    2012-09-01

    Full Text Available Abstract Background In resource limited settings, non-availability of a CD4 count facility at the site could adversely affect the ART roll out programme. Point of care CD4 enumerating equipment can make the CD4 count available at the site of care and improve the patients' management considerably. This study is aimed at determining the utility of a point of care PIMA CD4 analyzer (Alere, Germany) in the field settings in India. Method The blood samples were collected from 1790 participants at 21 ART centers from different parts of the country and tested using PIMA and the reference methods (FACSCalibur, FACSCount and CyFlow SL3). The paired finger prick and venous blood samples from 175 participants were tested by the PIMA CD4 analyzer and then by FACSCalibur. Result The CD4 counts obtained by the PIMA CD4 analyzer showed excellent correlation with the counts obtained by the reference methods; for venous blood the Pearson's r was 0.921, p 500 cells/mm3, the differences in the median CD4 counts obtained by the reference method and the PIMA analyzer were not significant (P > 0.05) and the relative bias was low (−7 to 5.1%). The inter-machine comparison showed variation within the acceptable %CV limit of 10%. Conclusion In the field settings, the POC PIMA CD4 analyzer gave CD4 counts comparable to the reference methods for all CD4 ranges. The POC equipment could identify the patients eligible for ART in 91% of cases. Adequate training is necessary for finger prick sample collection for optimum results. Decentralization of CD4 testing by making CD4 counts available at primary health centers, especially in remote areas with minimal or no infrastructure, would reduce missed visits and improve patient adherence.

  17. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills which breaks to N=1 via the electroweak interactions. This is a well defined, economical......, between unparticle physics and Minimal Walking Technicolor. We consider also other N =1 extensions of the Minimal Walking Technicolor model. The new models allow all the standard model matter fields to acquire a mass....

  18. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Science.gov (United States)

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295

  19. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Directory of Open Access Journals (Sweden)

    Shih-Wei Lin

    2014-01-01

    Full Text Available Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.
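The destruction-construction loop at the heart of an iterated greedy heuristic can be sketched generically over a permutation (this is our skeleton, not the authors' implementation; for the DBAP, `cost` would encode total service time of a ship-to-berth sequence):

```python
import random

def iterated_greedy(cost, sol, d=2, iters=100, seed=0):
    """Generic iterated greedy: repeatedly remove d elements at random
    (destruction) and greedily reinsert each at its best position
    (construction), keeping the candidate if it does not worsen cost."""
    rng = random.Random(seed)
    best = list(sol)
    for _ in range(iters):
        cand = list(best)
        removed = [cand.pop(rng.randrange(len(cand))) for _ in range(d)]
        for item in removed:
            # Greedy construction: try every insertion position, keep the best.
            pos = min(range(len(cand) + 1),
                      key=lambda p: cost(cand[:p] + [item] + cand[p:]))
            cand.insert(pos, item)
        if cost(cand) <= cost(best):
            best = cand
    return best
```

Because candidates are only accepted when cost does not increase, the returned solution is never worse than the starting one; real IG variants often add a simulated-annealing-style acceptance to escape local optima.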

  20. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: Application to SSSH

    Science.gov (United States)

    Kolmann, Stephen J.; Jordan, Meredith J. T.

    2010-02-01

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol⁻¹ at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol⁻¹ dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol⁻¹ lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol⁻¹ lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol⁻¹ thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.

  1. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: application to SSSH.

    Science.gov (United States)

    Kolmann, Stephen J; Jordan, Meredith J T

    2010-02-07

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol(-1) at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol(-1) dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol(-1) lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol(-1) lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol(-1) thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.
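For context on the harmonic reference that both copies of this record compare against: the harmonic ZPE is half the sum of the vibrational quanta, (1/2)Σhcν̃ᵢ over the normal-mode wavenumbers. A unit-conversion sketch from cm⁻¹ to kJ mol⁻¹ (the function name is ours; the constants are the SI-defined values):

```python
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e10       # speed of light, cm/s (so wavenumbers stay in cm^-1)
NA = 6.02214076e23      # Avogadro constant, 1/mol

def harmonic_zpe_kj_per_mol(wavenumbers_cm1):
    """Harmonic zero-point energy, (1/2) * sum(h*c*nu_i), from a list of
    normal-mode wavenumbers in cm^-1, returned in kJ/mol."""
    return 0.5 * H * C * NA * sum(wavenumbers_cm1) / 1000.0
```

A single mode at 1000 cm⁻¹ contributes about 5.98 kJ mol⁻¹, which makes the 0.3-0.7 kJ mol⁻¹ anharmonic corrections quoted in the record a few percent of a typical quantum.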

  2. Maximal translational equivalence classes of musical patterns in point-set representations

    DEFF Research Database (Denmark)

    Collins, Tom; Meredith, David

    2013-01-01

    Representing musical notes as points in pitch-time space causes repeated motives and themes to appear as translationally related patterns that often correspond to maximal translatable patterns (MTPs). However, an MTP is also often the union of a salient pattern with one or two temporally isolated...
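A maximal translatable pattern is, for a given translation vector v, the set of all dataset points that remain inside the dataset when shifted by v. A direct O(n²) sketch over 2-D pitch-time points (our naming and a brute-force approach, not the SIA-family implementation the authors use):

```python
def mtps(points):
    """Maximal translatable patterns of a point set D: for each nonzero
    translation vector v, MTP(v) = {p in D : p + v in D}."""
    D = set(points)
    out = {}
    for p in D:
        for q in D:
            if p != q:
                v = (q[0] - p[0], q[1] - p[1])   # vector translating p to q
                out.setdefault(v, set()).add(p)
    return out
```

For three equally spaced notes, the vector between neighbors is translatable from the first two points, which is exactly the repeated-motive structure MTP analysis exposes.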

  3. Implementation and automated validation of the minimal Z' model in FeynRules

    International Nuclear Information System (INIS)

    Basso, L.; Christensen, N.D.; Duhr, C.; Fuks, B.; Speckner, C.

    2012-01-01

    We describe the implementation of a well-known class of U(1) gauge models, the 'minimal' Z' models, in FeynRules. We also describe a new automated validation tool for FeynRules models which is controlled by a web interface and allows the user to run a complete set of 2 → 2 processes through different matrix element generators, in different gauges, and compare the results among them all. Where independent implementations exist, comparison with them is also possible. This tool has been used to validate our implementation of the 'minimal' Z' models. (authors)

  4. Geometry of convex polygons and locally minimal binary trees spanning these polygons

    International Nuclear Information System (INIS)

    Ivanov, A O; Tuzhilin, A A

    1999-01-01

    In previous works the authors have obtained an effective classification of planar locally minimal binary trees with convex boundaries. The main aim of the present paper is to find more subtle restrictions on the possible structure of such trees in terms of the geometry of the given boundary set. Special attention is given to the case of quasiregular boundaries (that is, boundaries that are sufficiently close to regular ones in a certain sense). In particular, a series of quasiregular boundaries that cannot be spanned by a locally minimal binary tree is constructed

  5. Of Minima and Maxima: The Social Significance of Minimal Competency Testing and the Search for Educational Excellence.

    Science.gov (United States)

    Ericson, David P.

    1984-01-01

    Explores the many meanings of the minimal competency testing movement and the more recent mobilization for educational excellence in the schools. Argues that increasing the value of the diploma by setting performance standards on minimal competency tests and by elevating academic graduation standards may strongly conflict with policies encouraging…

  6. A 34-meter VAWT (Vertical Axis Wind Turbine) point design

    Science.gov (United States)

    Ashwill, T. D.; Berg, D. E.; Dodd, H. M.; Rumsey, M. A.; Sutherland, H. J.; Veers, P. S.

    The Wind Energy Division at Sandia National Laboratories recently completed a point design based on the 34-m Vertical Axis Wind Turbine (VAWT) Test Bed. The 34-m Test Bed research machine incorporates several innovations that improve on previous Darrieus technology, including increased energy production. The point design differs minimally from the Test Bed, but by removing research-related items, its estimated cost is substantially reduced. The point design is a first step toward a Test-Bed-based commercial machine that would be competitive with conventional sources of power in the mid-1990s.

  7. Cognitive radio adaptation for power consumption minimization using biogeography-based optimization

    International Nuclear Information System (INIS)

    Qi Pei-Han; Zheng Shi-Lian; Yang Xiao-Niu; Zhao Zhi-Jin

    2016-01-01

    Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. (paper)
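As a hedged illustration of the biogeography-based optimization idea described in the record above (habitats as candidate solutions, HSI as fitness, migration of features from high-HSI to low-HSI habitats), the sketch below minimizes a stand-in cost function. The linear migration rates, elitism, and mutation scheme are our own simplifications for illustration, not the paper's exact algorithm or its HSI evaluation mechanism:

```python
import random

def bbo_minimize(f, dim, bounds, pop=20, gens=100, pmut=0.05, seed=1):
    """Minimal biogeography-based optimization (BBO) sketch.

    Habitats = candidate solutions; fitness (HSI) = low cost f(x).
    Good habitats emigrate features to poor ones; a small mutation
    rate injects fresh feature values.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    hab = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        hab.sort(key=f)  # best (lowest cost) habitat first
        new = [hab[0][:]]  # elitism: keep the best habitat unchanged
        for r in range(1, pop):
            lam = r / (pop - 1)  # linear immigration rate by rank
            x = hab[r][:]
            for d in range(dim):
                if rng.random() < lam:
                    # Immigrate feature d from a better-ranked habitat
                    # (min of two random ranks biases toward good ones).
                    src = min(rng.randrange(pop), rng.randrange(pop))
                    x[d] = hab[src][d]
                if rng.random() < pmut:
                    x[d] = rng.uniform(lo, hi)  # mutation
            new.append(x)
        hab = new
    return min(hab, key=f)

# Stand-in for a power-consumption objective: the sphere function.
best = bbo_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
print(sum(v * v for v in best))  # a small residual cost near the optimum
```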

  8. Minimally invasive superficial temporal artery to middle cerebral artery bypass through a minicraniotomy: benefit of three-dimensional virtual reality planning using magnetic resonance angiography.

    Science.gov (United States)

    Fischer, Gerrit; Stadie, Axel; Schwandt, Eike; Gawehn, Joachim; Boor, Stephan; Marx, Juergen; Oertel, Joachim

    2009-05-01

    The aim of the authors in this study was to introduce a minimally invasive superficial temporal artery to middle cerebral artery (STA-MCA) bypass surgery by the preselection of appropriate donor and recipient branches in a 3D virtual reality setting based on 3-T MR angiography data. An STA-MCA anastomosis was performed in each of 5 patients. Before surgery, 3-T MR imaging was performed with 3D magnetization-prepared rapid acquisition gradient echo sequences, and a high-resolution CT 3D dataset was obtained. Image fusion and the construction of a 3D virtual reality model of each patient were completed. In the 3D virtual reality setting, the skin surface, skull surface, and extra- and intracranial arteries as well as the cortical brain surface could be displayed in detail. The surgical approach was successfully visualized in virtual reality. The anatomical relationship of structures of interest could be evaluated based on different values of translucency in all cases. The closest point of the appropriate donor branch of the STA and the most suitable recipient M(3) or M(4) segment could be calculated with high accuracy preoperatively and determined as the center point of the following minicraniotomy. Localization of the craniotomy and the skin incision on top of the STA branch was calculated with the system, and these data were transferred onto the patient's skin before surgery. In all cases the preselected arteries could be found intraoperatively in exact agreement with the preoperative planning data. Successful extracranial-intracranial bypass surgery was achieved without stereotactic neuronavigation via a preselected minimally invasive approach in all cases. Subsequent enlargement of the craniotomy was not necessary. Perioperative complications were not observed. All bypasses remained patent on follow-up. With the application of a 3D virtual reality planning system, the extent of skin incision and tissue trauma as well as the size of the bone flap was minimal. 

  9. Non-minimally coupled quintessence dark energy model with a cubic galileon term: a dynamical system analysis

    Science.gov (United States)

    Bhattacharya, Somnath; Mukherjee, Pradip; Roy, Amit Singha; Saha, Anirban

    2018-03-01

    We consider a scalar field which is generally non-minimally coupled to gravity and has a characteristic cubic Galileon-like term and a generic self-interaction, as a candidate Dark Energy model. The system is dynamically analyzed and novel fixed points with perturbative stability are demonstrated. Evolution of the system is numerically studied near a novel fixed point which owes its existence to the Galileon character of the model. It turns out that demanding the stability of this novel fixed point puts a strong restriction on the allowed non-minimal coupling and the choice of the self-interaction. The evolution of the equation of state parameter is studied, which shows that our model predicts an accelerated universe throughout and the phantom limit is only approached closely but never crossed. Our result thus extends the findings of Coley (Dynamical Systems and Cosmology, Kluwer Academic Publishers, Boston, 2013) to non-minimal couplings more general than linear and quadratic ones.

  10. Minimizing the Effect of Substantial Perturbations in Military Water Systems for Increased Resilience and Efficiency

    Directory of Open Access Journals (Sweden)

    Corey M. James

    2017-10-01

    Full Text Available A model predictive control (MPC) framework, exploiting both feedforward and feedback control loops, is employed to minimize large disturbances that occur in military water networks. Military installations’ need for resilient and efficient water supplies is often challenged by large disturbances like fires, terrorist activity, troop training rotations, and large scale leaks. This work applies the effectiveness of MPC to provide predictive capability and compensate for vast geographical differences and varying phenomena time scales using computational software and actual system dimensions and parameters. The results show that large disturbances are rapidly minimized while maintaining chlorine concentration within legal limits at the point of demand and overall water usage is minimized. The control framework also ensures pumping is minimized during peak electricity hours, so costs are kept lower than with simple proportional control. The control structure implemented in this work is able to support resiliency and increased efficiency on military bases by minimizing tank holdup, effectively countering large disturbances, and efficiently managing pumping.

  11. Data Point Averaging for Computational Fluid Dynamics Data

    Science.gov (United States)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
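The sub-area averaging described in the record above can be sketched as follows. The function name `average_by_subarea` and the grid-cell mapping are hypothetical choices for illustration, not part of the patented system:

```python
from collections import defaultdict

def average_by_subarea(points, subarea_of):
    """Average a flow-parameter value over points grouped into sub-areas.

    points      -- iterable of (x, y, value) tuples
    subarea_of  -- function mapping (x, y) to a sub-area identifier
    Returns {subarea_id: mean value of the points that fall in it}.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y, value in points:
        key = subarea_of(x, y)
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Toy example: samples on a surface, sub-areas = unit grid cells.
samples = [(0.2, 0.3, 10.0), (0.8, 0.1, 14.0), (1.5, 0.5, 20.0)]
cell = lambda x, y: (int(x), int(y))
print(average_by_subarea(samples, cell))  # {(0, 0): 12.0, (1, 0): 20.0}
```

The per-cell means would then stand in for the parameter values fed into the downstream heating analysis.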

  12. Gravity and matter in causal set theory

    International Nuclear Information System (INIS)

    Sverdlov, Roman; Bombelli, Luca

    2009-01-01

    The goal of this paper is to propose an approach to the formulation of dynamics for causal sets and coupled matter fields. We start from the continuum version of the action for a Klein-Gordon field coupled to gravity, and rewrite it first using quantities that have a direct correspondent in the case of a causal set, namely volumes, causal relations and timelike lengths, as variables to describe the geometry. In this step, the local Lagrangian density L(f;x) for a set of fields f is recast into a quasilocal expression L_0(f;p,q) that depends on pairs of causally related points p ≺ q and is a function of the values of f in the Alexandrov set defined by those points, and whose limit as p and q approach a common point x is L(f;x). We then describe how to discretize L_0(f;p,q) and use it to define a causal-set-based action.

  13. Genetics of the gravitropic set-point angle in lateral organs of Arabidopsis

    Science.gov (United States)

    Mullen, J.; Hangarter, R.

    Research on gravity responses in plants has mostly focused on primary roots and shoots, which typically orient to a vertical orientation. However, the distribution of lateral organs and their typically non-vertical growth orientation are critical for the determination of plant form. For example, in Arabidopsis, when lateral roots emerge from the primary root, they grow at a nearly horizontal orientation. As they elongate, the roots slowly curve until they eventually reach a vertical orientation. The regulation of this lateral root orientation is an important component affecting the overall root system architecture. We found that this change in orientation is not simply due to the onset of gravitropic competence, as non-vertical lateral roots are capable of both positive and negative gravitropism. Thus, the horizontal growth of the new lateral roots is determined by what is called the gravitropic set-point angle (GSA). This developmental control of the GSA of lateral roots in Arabidopsis provides a useful system for investigating the components involved in regulating gravitropic responses. Using this system, we have identified several Arabidopsis mutants that have altered lateral root orientations but maintain normal primary root orientation. Two of these mutants also have altered orientation of their rosette leaves, indicating some common mechanisms in the positioning of root and shoot lateral organs. Rosette leaves and lateral roots also have in common a regulation of orientation by red light that may be due to red-light-dependent changes in the GSA. Further molecular and physiological analyses of the GSA mutants will provide insight into the basis of GSA regulation and, thus, a better understanding of how gravity controls plant architecture. [This work was supported by the National Aeronautics and Space Administration through grant no. NCC 2-1200.]

  14. Basic set theory

    CERN Document Server

    Levy, Azriel

    2002-01-01

    An advanced-level treatment of the basics of set theory, this text offers students a firm foundation, stopping just short of the areas employing model-theoretic methods. Geared toward upper-level undergraduate and graduate students, it consists of two parts: the first covers pure set theory, including the basic notions, order and well-foundedness, cardinal numbers, the ordinals, and the axiom of choice and some of its consequences; the second deals with applications and advanced topics such as point set topology, real spaces, Boolean algebras, and infinite combinatorics and large cardinals. An

  15. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
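A minimal dynamic-programming sketch of the segmentation scheme outlined above. The measure function chosen here (the union of the item sets) and the segment-difference definition (items in the segment's set missing from each time point's set) are illustrative assumptions, not necessarily the paper's measure functions:

```python
def segment_cost(sets, i, j):
    # Segment [i, j]: measure function = union of the item sets;
    # segment difference = total size of (union minus each point's set).
    union = set().union(*sets[i:j + 1])
    return sum(len(union - s) for s in sets[i:j + 1])

def optimal_segmentation(sets, k):
    """Dynamic program: split `sets` into k contiguous segments
    minimizing the summed segment difference.
    Returns (cost, list of segment start indices)."""
    n = len(sets)
    INF = float("inf")
    # best[m][j] = min cost of splitting sets[0..j] into m segments
    best = [[INF] * n for _ in range(k + 1)]
    back = [[0] * n for _ in range(k + 1)]
    for j in range(n):
        best[1][j] = segment_cost(sets, 0, j)
    for m in range(2, k + 1):
        for j in range(m - 1, n):
            for i in range(m - 1, j + 1):  # last segment starts at i
                c = best[m - 1][i - 1] + segment_cost(sets, i, j)
                if c < best[m][j]:
                    best[m][j], back[m][j] = c, i
    # Recover the segment start indices by walking back pointers.
    starts, j = [], n - 1
    for m in range(k, 1, -1):
        i = back[m][j]
        starts.append(i)
        j = i - 1
    return best[k][n - 1], [0] + starts[::-1]

series = [{"a"}, {"a", "b"}, {"c"}, {"c", "d"}]
print(optimal_segmentation(series, 2))  # → (2, [0, 2])
```

Splitting between the "a/b" points and the "c/d" points keeps each segment's union close to its members, which is why the optimum places the second segment at index 2.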

  16. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  17. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  18. Minimizing Mutual Coupling

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.

  19. Naturalness reach of the large hadron collider in minimal supergravity

    International Nuclear Information System (INIS)

    Allanach, B.C.; Hetherington, J.P.J.; Parker, M.A.; Webber, B.R.

    2000-01-01

    We re-analyse the prospects of discovering supersymmetry at the LHC, in order to re-express coverage in terms of a fine-tuning parameter and to extend the analysis to scalar masses (m_0) above 2 TeV. We use minimal supergravity (mSUGRA) unification assumptions for the SUSY breaking parameters. Such high values of m_0 have recently been found to have a focus point, leading to relatively low fine-tuning. In addition, improvements in the simulations since the last study mean that this region no longer lacks radiative electroweak symmetry breaking. The best fine-tuning reach is found in a mono-leptonic channel, where for μ > 0, A_0 = 0 and tan β = 10 (corresponding to the focus point), all points in mSUGRA with m_0 0, mSUGRA does not evade detection provided the gaugino mass parameter M_1/2 < 460 GeV. (author)

  20. Environmental Assessment: Gulf Power Company Military Point Transmission Line Project

    Science.gov (United States)

    2014-05-12

    1500.4. 1.4. 1 Geology No geological hazards, seismic risks, or unstable slopes occur in the vicinity of Military Point on Tyndall AFB. Therefore, the...with sand grout to fill any voids within the casing and provide thermal insulation for the electric cable. Lengths greater than 3,000 feet would be...devices designed to either insulate or isolate potential contact points have been recommended to minimize the risk of eagle electrocution, which are

  1. Point cloud data management (extended abstract)

    NARCIS (Netherlands)

    Van Oosterom, P.J.M.; Ravada, S.; Horhammer, M.; Martinez Rubi, O.; Ivanova, M.; Kodde, M.; Tijssen, T.P.M.

    2014-01-01

    Point cloud data are important sources for 3D geo-information. The point cloud data sets are growing in popularity and in size. Modern Big Data acquisition and processing technologies, such as laser scanning from airborne, mobile, or static platforms, dense image matching from photos, multi-beam

  2. A strategy to find minimal energy nanocluster structures.

    Science.gov (United States)

    Rogan, José; Varas, Alejandro; Valdivia, Juan Alejandro; Kiwi, Miguel

    2013-11-05

    An unbiased strategy to search for the global and local minimal energy structures of free-standing nanoclusters is presented. Our objectives are twofold: to find a diverse set of low-lying local minima, as well as the global minimum. To do so, we make extensive use of the fast inertial relaxation engine (FIRE) algorithm as an efficient local minimizer. This procedure turns out to be quite efficient at reaching the global minimum, and also most of the local minima. We test the method with the Lennard-Jones (LJ) potential, for which an abundant literature exists, and obtain novel results, which include a new local minimum for LJ13, 10 new local minima for LJ14, and thousands of new local minima for 15 ≤ N ≤ 65. Insights on how to choose the initial configurations, analyzing the effectiveness of the method in reaching low-energy structures, including the global minimum, are developed as a function of the number of atoms in the cluster. Also, a novel characterization of the potential energy surface, analyzing properties of the local minima basins, is provided. The procedure constitutes a promising tool to generate a diverse set of cluster conformations, both two- and three-dimensional, that can be used as input for refinement by means of ab initio methods. Copyright © 2013 Wiley Periodicals, Inc.
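The FIRE minimizer used in the record above is more elaborate; as a stand-in illustration of local relaxation on the Lennard-Jones potential, here is a plain steepest-descent sketch on the simplest possible cluster, a two-atom dimer (the step size, iteration count, and function names are our own choices, not the authors'):

```python
def lj_energy_force(r):
    # Lennard-Jones pair potential with epsilon = sigma = 1:
    # E(r) = 4 (r^-12 - r^-6), and the radial force F(r) = -dE/dr.
    e = 4.0 * (r**-12 - r**-6)
    f = 4.0 * (12.0 * r**-13 - 6.0 * r**-7)
    return e, f

def relax_dimer(r0, step=1e-3, iters=20000):
    """Steepest-descent relaxation of a two-atom LJ cluster:
    move the interatomic distance along the force until it vanishes."""
    r = r0
    for _ in range(iters):
        _, f = lj_energy_force(r)
        r += step * f
    return r

r_min = relax_dimer(1.5)
print(round(r_min, 4))  # the analytic minimum is 2**(1/6) ≈ 1.1225
```

A production search, as in the record, would repeat such local relaxations from many unbiased starting configurations and collect the distinct minima found.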

  3. Risk Minimization for Insurance Products via F-Doubly Stochastic Markov Chains

    Directory of Open Access Journals (Sweden)

    Francesca Biagini

    2016-07-01

    Full Text Available We study risk-minimization for a large class of insurance contracts. Given that the individual progress in time of visiting an insurance policy’s states follows an F-doubly stochastic Markov chain, we describe different state-dependent types of insurance benefits. These cover single payments at maturity, annuity-type payments and payments at the time of a transition. Based on the intensity of the F-doubly stochastic Markov chain, we provide the Galtchouk-Kunita-Watanabe decomposition for a general insurance contract and specify risk-minimizing strategies in a Brownian financial market setting. The results are further illustrated explicitly within an affine structure for the intensity.

  4. Mixed-order phase transition in a minimal, diffusion-based spin model.

    Science.gov (United States)

    Fronczak, Agata; Fronczak, Piotr

    2016-07-01

    In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with the hybrid phase transition. We call the model diffusion-based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.

  5. A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Börcs

    2012-07-01

    Full Text Available In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.

  6. 3D reconstruction of laser projective point with projection invariant generated from five points on 2D target.

    Science.gov (United States)

    Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian

    2017-08-01

    Vision measurement on the basis of structured light plays a significant role in optical inspection research. A 2D target fixed with a line laser projector is designed to realize the transformations among the world coordinate system, the camera coordinate system and the image coordinate system. The laser projective point and five non-collinear points randomly selected from the target are adopted to construct a projection invariant. The closed-form solutions of the 3D laser points are obtained from the homogeneous linear equations generated from the projection invariants. An optimization function is created from the parameterized re-projection errors of the laser points and the target points in the image coordinate system. Furthermore, the nonlinear optimization solutions for the world coordinates of the projection points, the camera parameters and the lens distortion coefficients are obtained by minimizing this function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of image quantity, lens distortion and noise are investigated in the experiments, which demonstrate that the reconstruction approach yields accurate measurements.

  7. Heuristics for minimizing the maximum within-clusters distance

    Directory of Open Access Journals (Sweden)

    José Augusto Fioruci

    2012-12-01

    Full Text Available The clustering problem consists in finding patterns in a data set in order to divide it into clusters with high within-cluster similarity. This paper presents the study of a problem, here called the MMD problem, which aims at finding a clustering with a predefined number of clusters that minimizes the largest within-cluster distance (diameter) among all clusters. There are two main objectives in this paper: to propose heuristics for the MMD problem and to evaluate the suitability of the best proposed heuristic's results against the real classification of some data sets. Regarding the first objective, the results obtained in the experiments indicate a good performance of the best proposed heuristic, which outperformed the Complete Linkage algorithm (the most used method in the literature for this problem). Nevertheless, regarding agreement with the real classification of the data sets, the proposed heuristic achieved better quality results than the C-Means algorithm, but worse than Complete Linkage.
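The objective in the record above can be illustrated with a simple agglomerative sketch in the spirit of the Complete Linkage baseline: greedily merge the pair of clusters whose union has the smallest diameter until the target number of clusters remains. This is an illustrative implementation under our own assumptions, not the authors' heuristic:

```python
def diameter(cluster, dist):
    # Largest pairwise distance within a cluster (0 for a singleton).
    return max((dist(a, b) for a in cluster for b in cluster), default=0.0)

def complete_linkage(points, k, dist):
    """Agglomerative clustering: repeatedly merge the two clusters whose
    union has the smallest diameter, until k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = diameter(clusters[i] + clusters[j], dist)
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# 1-D toy data: three natural groups, k = 3 keeps the max diameter small.
pts = [0.0, 0.1, 0.2, 5.0, 5.1, 9.0]
out = complete_linkage(pts, 3, lambda a, b: abs(a - b))
print(sorted(map(sorted, out)))  # [[0.0, 0.1, 0.2], [5.0, 5.1], [9.0]]
```

The greedy merge keeps every intermediate diameter as small as possible, which is exactly the MMD objective applied locally at each step.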

  8. Hinkley Point 'C' power station public inquiry: proof of evidence on comparison of non-fossil options to Hinkley Point 'C'

    International Nuclear Information System (INIS)

    Goddard, S.C.

    1988-09-01

    A public inquiry has been set up to examine the planning application made by the Central Electricity Generating Board (CEGB) for the construction of a 1200 MW Pressurized Water Reactor power station at Hinkley Point (Hinkley Point 'C') in the United Kingdom. This evidence to the Inquiry sets out and explains the non-fossil fuel options, with particular reference to renewable energy sources and other PWR locations; gives feasibility, capital cost, performance and total resource estimates for the renewable sources; and shows that no other non-fossil fuel source is to be preferred to Hinkley Point 'C'. (author)

  9. SharePoint 2007 Collaboration For Dummies

    CERN Document Server

    Harvey, Greg

    2009-01-01

    If you're looking for a way to help your teams access what they need to know, work together, and get the job done, SharePoint can do just that. SharePoint 2007 Collaboration For Dummies shows you the easiest way to set up and customize SharePoint, manage your data, interact using SharePoint blogs and wikis, integrate Office programs, and make your office more productive. You'll learn what SharePoint can do and how to make it work for your business, understand the technical terms, and enable your people to collaborate on documents and spreadsheets. You'll even discover how to get SharePoint hel

  10. Modular differential equations for torus one-point functions

    International Nuclear Information System (INIS)

    Gaberdiel, Matthias R; Lang, Samuel

    2009-01-01

    It is shown that in a rational conformal field theory every torus one-point function of a given highest weight state satisfies a modular differential equation. We derive and solve these differential equations explicitly for some Virasoro minimal models. In general, however, the resulting amplitudes do not seem to be expressible in terms of standard transcendental functions

  11. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
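The two sampling schemes compared in the record above can be illustrated on a toy sequence: fixed sampling keeps every step-th k-mer position, while minimizer sampling keeps, from each window of w consecutive k-mers, the position of the smallest one. Function names and the lexicographic ordering are our own illustrative choices:

```python
def fixed_sampling(seq, k, step):
    """Keep every step-th k-mer position (fixed sampling)."""
    return {i: seq[i:i + k] for i in range(0, len(seq) - k + 1, step)}

def minimizer_sampling(seq, k, w):
    """Keep, for each window of w consecutive k-mer positions, the
    position of the lexicographically smallest k-mer (the minimizer).
    Overlapping windows often pick the same position, which is what
    makes the sampled index small."""
    picked = {}
    n = len(seq) - k + 1  # number of k-mer positions
    for start in range(n - w + 1):
        window = range(start, start + w)
        pos = min(window, key=lambda i: seq[i:i + k])
        picked[pos] = seq[pos:pos + k]
    return picked

seq = "ACGTACGGT"
print(sorted(fixed_sampling(seq, 4, 2)))      # [0, 2, 4]
print(sorted(minimizer_sampling(seq, 4, 3)))  # [0, 1, 4]
```

Note the asymmetry the study exploits: a query can be minimizer-sampled with the same (k, w) and still share sampled k-mers with the index, whereas against a fixed-sampled index every query k-mer must be looked up.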

  12. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Directory of Open Access Journals (Sweden)

    Meznah Almutairy

    Full Text Available Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.

  13. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    Science.gov (United States)

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  14. Minimizing employee exposure to toxic chemical releases

    International Nuclear Information System (INIS)

    Plummer, R.W.; Stobbe, T.J.; Mogensen, J.E.; Jeram, L.K.

    1987-01-01

    This book describes procedures for minimizing employee exposure to toxic chemical releases and suggested personal protective equipment (PPE) to be used in the event of such a release. How individuals, employees, supervisors, or companies perceive the risks of chemical exposure (risk meaning both the probability of exposure and the effect of exposure) determines to a great extent what precautions are taken to avoid risk. In Part I, the authors develop an approach that divides the project into three phases: the kinds of procedures currently in use; the types of toxic chemical release accidents and injuries that occur; and, finally, the integration of this information into a set of recommended procedures that should decrease the likelihood of a toxic chemical release and, if one does occur, minimize the exposure and its severity to employees. Part II covers the use of personal protective equipment. It addresses two questions: what personal protective equipment ensembles are used in industry in situations where the release of a toxic or dangerous chemical may occur or has occurred, and what personal protective equipment ensembles should be used in these situations.

  15. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    Science.gov (United States)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

    Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Spatial information is typically applied to aerial images or their derivatives through onboard GPS (Global Positioning System) geotagging or by tying models to GCPs (Ground Control Points) acquired in the field. Currently, UAS derivatives are limited to meter-level accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only in instrument acquisition and survey operations but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm. It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a 'skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data. For this cloud, roads and buildings with minimal deviations given their differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud and can then be applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was done to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud.
Cloud-to-cloud distance computations of
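
    The registration step used on the skeleton cloud can be sketched as a minimal point-to-point ICP with brute-force nearest neighbours (an illustration under synthetic data, not the CloudCompare implementation):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (the Kabsch/SVD step executed inside each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Basic point-to-point ICP on a small 'skeleton' cloud."""
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every point of cur (brute force)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    # total transform from the original cloud to its aligned position
    return best_rigid_transform(src, cur)
```

    CloudCompare's ICP additionally handles partial overlap and outlier rejection; this sketch only shows how a rotation and translation are estimated from matched points and refined iteratively.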

  16. Hawaii ESI: M_MAMPT (Marine Mammal Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for endangered Hawaiian monk seal pupping and haul-out sites. Vector points in this data set represent...

  17. Visual Communication in PowerPoint Presentations in Applied Linguistics

    Science.gov (United States)

    Kmalvand, Ayad

    2014-01-01

    PowerPoint knowledge presentation as a digital genre has established itself as the main software by which the findings of theses are disseminated in the academic settings. Although the importance of PowerPoint presentations is typically realized in academic settings like lectures, conferences, and seminars, the study of the visual features of…

  18. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    Science.gov (United States)

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models are comparable.
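
    Of the rational division methods named above, the Kennard-Stone algorithm is the simplest to sketch (an illustrative implementation, not the study's code): it seeds the training set with the two most distant samples and then repeatedly adds the sample farthest from the already-chosen set.

```python
import numpy as np

def kennard_stone(X, n_train):
    """Kennard-Stone split: returns (train_indices, test_indices)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(d.argmax(), d.shape)   # two most distant samples
    chosen = [i, j]
    while len(chosen) < n_train:
        rest = [k for k in range(len(X)) if k not in chosen]
        # add the sample whose nearest chosen sample is farthest away
        k = max(rest, key=lambda r: d[r, chosen].min())
        chosen.append(k)
    return sorted(chosen), sorted(set(range(len(X))) - set(chosen))
```

    This deterministic coverage of the descriptor space is what is meant by "intelligent" splitting, in contrast to random division.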

  19. OxMaR: open source free software for online minimization and randomization for clinical trials.

    Science.gov (United States)

    O'Callaghan, Christopher A

    2014-01-01

    Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies and this software should allow more widespread use of minimization which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.
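
    The core of a minimization allocator can be sketched in a few lines (a deterministic illustration, not OxMaR's actual code; real schemes such as OxMaR typically add a weighted random element to keep allocation unpredictable):

```python
def minimization_assign(arm_counts, patient):
    """Choose the arm that minimizes the summed imbalance (max - min count)
    across the new participant's factor levels after a hypothetical
    assignment. Deterministic variant for illustration only."""
    arms = list(arm_counts)
    def imbalance(candidate):
        total = 0
        for factor, level in patient.items():
            counts = [arm_counts[arm][factor].get(level, 0) + (arm == candidate)
                      for arm in arms]
            total += max(counts) - min(counts)
        return total
    return min(arms, key=imbalance)

# arm_counts[arm][factor][level] = participants already allocated
counts = {"control":      {"sex": {"F": 3}, "age": {"old": 2}},
          "experimental": {"sex": {"F": 1}, "age": {"old": 2}}}
print(minimization_assign(counts, {"sex": "F", "age": "old"}))
```

    Because each allocation depends on the running counts, a multi-site trial needs exactly the kind of centralized, real-time state that the abstract describes.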

  20. OxMaR: open source free software for online minimization and randomization for clinical trials.

    Directory of Open Access Journals (Sweden)

    Christopher A O'Callaghan

    Full Text Available Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies and this software should allow more widespread use of minimization which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.

  1. Optimizing Geographic Allotment of Photovoltaic Capacity in a Distributed Generation Setting: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Urquhart, B.; Sengupta, M.; Keller, J.

    2012-09-01

    A multi-objective optimization was performed to allocate 2MW of PV among four candidate sites on the island of Lanai such that energy was maximized and variability in the form of ramp rates was minimized. This resulted in an optimal solution set which provides a range of geographic allotment alternatives for the fixed PV capacity. Within the optimal set, a tradeoff between energy produced and variability experienced was found, whereby a decrease in variability always necessitates a simultaneous decrease in energy. A design point within the optimal set was selected for study which decreased extreme ramp rates by over 50% while only decreasing annual energy generation by 3% over the maximum generation allocation. To quantify the allotment mix selected, a metric was developed, called the ramp ratio, which compares ramping magnitude when all capacity is allotted to a single location to the aggregate ramping magnitude in a distributed scenario. The ramp ratio quantifies simultaneously how much smoothing a distributed scenario would experience over single site allotment and how much a single site is being under-utilized for its ability to reduce aggregate variability. This paper creates a framework for use by cities and municipal utilities to reduce variability impacts while planning for high penetration of PV on the distribution grid.
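
    The ramp ratio metric described above can be sketched as follows (the abstract does not give the exact formula, so the aggregation below, summed absolute step-to-step ramps, is an assumption):

```python
import numpy as np

def ramp_ratio(single_site_power, distributed_power):
    """Ramping magnitude with all capacity at one site divided by the
    aggregate ramping magnitude of the distributed allotment. A value
    above 1 means the distributed layout smooths ramps, i.e. the single
    site is under-utilized in its ability to reduce variability."""
    single = np.abs(np.diff(np.asarray(single_site_power))).sum()
    aggregate = np.abs(np.diff(np.asarray(distributed_power))).sum()
    return single / aggregate

# A site swinging between 0 and 2 MW versus a distributed aggregate
# that only swings by 0.5 MW:
print(ramp_ratio([0.0, 2.0, 0.0, 2.0], [1.0, 1.5, 1.0, 1.5]))  # 4.0
```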

  2. Refining Lane-Based Traffic Signal Settings to Satisfy Spatial Lane Length Requirements

    Directory of Open Access Journals (Sweden)

    Yanping Liu

    2017-01-01

    Full Text Available In conventional lane-based signal optimization models, lane markings guiding road users in making turns are optimized together with traffic signal settings in a unified framework to maximize the overall intersection capacity or minimize the total delay. The spatial queue requirements of road lanes should be considered to avoid overdesign of green durations. The point queue system adopted in the conventional lane-based framework can cause overflow in practice. Based on the optimization results from the original lane-based designs, a refinement is proposed to enhance the lane-based settings and ensure that the spatial holding limits of the approaching traffic lanes are not exceeded. A solution heuristic is developed to modify the green start times, green durations, and cycle length by considering the vehicle queuing patterns and physical holding capacities along the approaching traffic lanes. To show the effectiveness of this traffic signal refinement, a case study of one of the busiest and most complicated intersections in Hong Kong is given for demonstration. A site survey was conducted to collect existing traffic demand patterns and existing traffic signal settings in peak periods. Results show that the proposed refinement method is effective in ensuring that all vehicle queue lengths satisfy spatial lane capacity limits, including short lanes, in daily operation.

  3. Confocal fluorescence microscopy for minimal-invasive tumor diagnosis

    International Nuclear Information System (INIS)

    Zenzinger, M.; Bille, J.

    2000-01-01

    The goal of the project ''stereotactic laser-neurosurgery'' is the development of a system for careful, minimally invasive resection of brain tumors with ultrashort laser pulses delivered through a thin probe. A confocal laser scanning microscope is integrated in the probe. In this paper, the simulation of its optical properties with a laboratory setup and its extension to fluorescence microscopy are reported. To evaluate the imaging properties, the three-dimensional point spread function and the axial depth transfer function were measured, and thus, among other things, the resolving power and the capacity for depth discrimination were analysed. The microscope will enable intra-operative detection of tumor cells by the method of immunofluorescence. As a first model of the application in the brain, cell cultures to which fluorescein-labelled antibodies had been specifically bound were used in this work. Due to the fluorescence signal, it was possible to detect and clearly identify the areas that had been marked in this manner, proving the suitability of the setup for minimally invasive tumor diagnosis. (orig.)

  4. Selection of coplanar or noncoplanar beams using three-dimensional optimization based on maximum beam separation and minimized nontarget irradiation

    International Nuclear Information System (INIS)

    Das, Shiva K.; Marks, Lawrence B.

    1997-01-01

    Purpose: The design of an appropriate set of multiple fixed fields to achieve a steep dose gradient at the tumor edge, with minimal normal tissue exposure, is a very difficult problem, since a virtually infinite number of possible beam orientations exists. In practice we have selected beams in an iterative and often time-consuming process. This work proposes an optimization method, based on geometric and dose elements, to arrive effectively at a set of beam orientations. Methods and Materials: Beams are selected by minimizing a goal function that includes an angle function (beam separation for a steep dose gradient at the target edge) and a length function (related to the normal tissue dose volume histogram). The relative importance of these two factors may be adjusted depending on the clinical situation. The model is flexible and can include case-specific practical anatomic and physical considerations. Results: In extremely simple situations, the goal function yields results consistent with well-known analytical solutions. When applied to more complex clinical situations, it provides clinically reasonable solutions similar to those empirically developed by the clinician. The optimization process takes approximately 25 min on a UNIX workstation. Conclusion: The optimization scheme provides a practical means for rapidly designing multiple-field coplanar or noncoplanar treatments. It overcomes limitations in human three-dimensional visualization, such as trying to visualize beam directions and keeping track of the hinge angle between beams while accounting for anatomic/machine constraints. In practice, it has been used as a starting point for physicians to make modifications based on their clinical judgment.
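
    The two ingredients of such a goal function can be sketched as below (the function names, weights, and brute-force search are illustrative assumptions; the paper's actual angle and length terms are not reproduced here):

```python
from itertools import combinations
import numpy as np

def beam_goal(angles, path_length, w_angle=1.0, w_length=1.0):
    """Toy goal function: reward widely separated beam angles (steep dose
    gradient at the target edge) and penalize normal tissue traversed."""
    a = np.sort(np.asarray(angles, dtype=float))
    gaps = np.diff(np.append(a, a[0] + 360.0))   # circular gaps in degrees
    angle_term = -gaps.min()                     # more separation -> lower cost
    length_term = sum(path_length(t) for t in angles)
    return w_angle * angle_term + w_length * length_term

def select_beams(candidates, n_beams, path_length):
    """Exhaustively pick the n_beams candidate angles with the lowest goal."""
    return min(combinations(candidates, n_beams),
               key=lambda c: beam_goal(c, path_length))

# With a uniform tissue term, three beams spread out evenly:
print(select_beams(range(0, 360, 30), 3, lambda t: 0.0))  # (0, 120, 240)
```

    In a clinical setting the `path_length` term would come from the patient's anatomy, and noncoplanar candidates would add a second angle per beam.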

  5. Blue-noise remeshing with farthest point optimization

    KAUST Repository

    Yan, Dongming

    2014-08-01

    In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on farthest point optimization (FPO), a relaxation technique that generates high-quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to that of current state-of-the-art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.
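
    The farthest-point relaxation can be illustrated with a toy 2D version (random candidate positions stand in for the geometric farthest-point computation of the actual method):

```python
import numpy as np

def min_dist(pts):
    """Smallest pairwise distance in a point set."""
    d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

def fpo_relax(points, iters=20, n_cand=500, seed=1):
    """Move each point, one at a time, to the candidate position farthest
    from all remaining points. Keeping the current position among the
    candidates makes the minimum inter-point distance non-decreasing."""
    rng = np.random.default_rng(seed)
    pts = points.copy()
    for _ in range(iters):
        for i in range(len(pts)):
            others = np.delete(pts, i, axis=0)
            cand = np.vstack([pts[i], rng.random((n_cand, 2))])
            # per-candidate distance to the nearest remaining point
            d = np.linalg.norm(cand[:, None] - others[None], axis=-1).min(axis=1)
            pts[i] = cand[d.argmax()]
    return pts
```

    Raising the minimum inter-point distance in this way is exactly what pushes a point set toward the blue-noise regime.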

  6. Blue-noise remeshing with farthest point optimization

    KAUST Repository

    Yan, Dongming; Guo, Jianwei; Jia, Xiaohong; Zhang, Xiaopeng; Wonka, Peter

    2014-01-01

    In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on farthest point optimization (FPO), a relaxation technique that generates high-quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to that of current state-of-the-art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.

  7. Iterative Schemes for Convex Minimization Problems with Constraints

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    Full Text Available We first introduce and analyze one implicit iterative algorithm for finding a solution of the minimization problem for a convex and continuously Fréchet differentiable functional, with constraints of several problems: the generalized mixed equilibrium problem, the system of generalized equilibrium problems, and finitely many variational inclusions in a real Hilbert space. We prove strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another implicit iterative algorithm for finding a fixed point of infinitely many nonexpansive mappings with the same constraints, and derive its strong convergence under mild assumptions.

  8. Minimization of number of setups for mounting machines

    Energy Technology Data Exchange (ETDEWEB)

    Kolman, Pavel; Nchor, Dennis; Hampel, David [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, 603 00 Brno (Czech Republic); Žák, Jaroslav [Institute of Technology and Business, Okružní 517/10, 370 01 České Budejovice (Czech Republic)

    2015-03-10

    The article deals with the problem of minimizing the number of setups for mounting SMT machines. An SMT machine is a device used to assemble components on printed circuit boards (PCBs) during the manufacturing of electronics. Each type of PCB requires a different, obligatory set of components. Components are placed in the SMT tray. The problem is that the total number of components used across all products is greater than the size of the tray. Therefore, every change of manufactured product requires a complete change of components in the tray (i.e., a setup change). Currently, the number of setups corresponds to the number of printed circuit board types, and every production change triggers a setup change that stops production for one shift. Many components occur in several products, so the question arose as to how to group the products so as to minimize the number of setups. This would result in a large increase in production efficiency.
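
    The grouping idea can be sketched with a simple first-fit heuristic (illustrative only, not the article's optimization model): products are merged into one setup group whenever the union of their component sets still fits in the tray.

```python
def group_products(products, tray_size):
    """First-fit grouping: each group shares one tray setup, so the union
    of the component sets inside a group must not exceed the tray size."""
    groups = []
    # placing component-rich products first tends to leave fewer leftovers
    for name, comps in sorted(products.items(), key=lambda kv: -len(kv[1])):
        for group in groups:
            if len(group["components"] | comps) <= tray_size:
                group["products"].append(name)
                group["components"] |= comps
                break
        else:
            groups.append({"products": [name], "components": set(comps)})
    return groups

boards = {"P1": {1, 2, 3}, "P2": {2, 3, 4}, "P3": {7, 8, 9, 10}}
print(len(group_products(boards, tray_size=5)))   # 2 setups instead of 3
```

    Each group then requires only a single setup, which is the efficiency gain the article aims for.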

  9. Minimizing off-Target Mutagenesis Risks Caused by Programmable Nucleases.

    Science.gov (United States)

    Ishida, Kentaro; Gee, Peter; Hotta, Akitsu

    2015-10-16

    Programmable nucleases, such as zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the clustered regularly interspaced short palindromic repeats-associated protein 9 (CRISPR-Cas9), hold tremendous potential for applications in the clinical setting to treat genetic diseases or prevent infectious diseases. However, because the accuracy of DNA recognition by these nucleases is not always perfect, off-target mutagenesis may result in undesirable adverse events in treated patients, such as cellular toxicity or tumorigenesis. Therefore, nuclease design and activity must be carefully evaluated to minimize off-target mutagenesis. Furthermore, rigorous genomic testing will be important to ensure the integrity of nuclease-modified cells. In this review, we provide an overview of available nuclease design platforms, nuclease engineering approaches to minimize off-target activity, and methods to evaluate both on- and off-target cleavage of CRISPR-Cas9.

  10. Minimal average consumption downlink base station power control strategy

    OpenAIRE

    Holtkamp H.; Auer G.; Haas H.

    2011-01-01

    We consider single cell multi-user OFDMA downlink resource allocation on a flat-fading channel such that average supply power is minimized while fulfilling a set of target rates. Available degrees of freedom are transmission power and duration. This paper extends our previous work on power optimal resource allocation in the mobile downlink by detailing the optimal power control strategy investigation and extracting fundamental characteristics of power optimal operation in cellular downlink. W...

  11. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....

  12. Legal incentives for minimizing waste

    International Nuclear Information System (INIS)

    Clearwater, S.W.; Scanlon, J.M.

    1991-01-01

    Waste minimization, or pollution prevention, has become an integral component of federal and state environmental regulation. Minimizing waste offers many economic and public relations benefits. In addition, waste minimization efforts can also dramatically reduce potential criminal requirements. This paper addresses the legal incentives for minimizing waste under current and proposed environmental laws and regulations

  13. Challenges and Opportunities of Centrifugal Microfluidics for Extreme Point-of-Care Testing

    Directory of Open Access Journals (Sweden)

    Issac J. Michael

    2016-02-01

    Full Text Available The advantages offered by centrifugal microfluidic systems have encouraged their rapid adoption in the fields of in vitro diagnostics, clinical chemistry, immunoassays, and nucleic acid tests. Centrifugal microfluidic devices are currently used in both clinical and point-of-care settings. Recent studies have shown that this new diagnostic platform could potentially be used in extreme point-of-care settings like remote villages in the Indian subcontinent and in Africa. Several technological inventions have decentralized diagnostics in developing countries; however, very few microfluidic technologies have been successful in meeting the demand. By identifying the fine distinctions between point-of-care testing and extreme point-of-care infrastructure, this review captures the evolving diagnostic needs of developing countries, pairing infrastructural challenges with the technological hurdles to healthcare delivery in extreme point-of-care settings. In particular, the requirements for making centrifugal diagnostic devices viable in developing countries are discussed based on a detailed analysis of the demands in different clinical settings, including the distinctive needs of extreme point-of-care settings.

  14. The Use of Trust Regions in Kohn-Sham Total Energy Minimization

    International Nuclear Information System (INIS)

    Yang, Chao; Meza, Juan C.; Wang, Lin-wang

    2006-01-01

    The Self-Consistent Field (SCF) iteration, widely used for computing the ground state energy and the corresponding single-particle wave functions of a many-electron atomistic system, is viewed in this paper as an optimization procedure that minimizes the Kohn-Sham total energy indirectly by minimizing a sequence of quadratic surrogate functions. We point out the similarities and differences between the total energy and the surrogate, and show how the SCF iteration can fail when the minimizer of the surrogate produces an increase in the KS total energy. A trust region technique is introduced as a way to restrict the update of the wave functions to a small neighborhood of an approximate solution at which the gradient of the total energy agrees with that of the surrogate. The use of trust regions in SCF is not new. However, it has been observed that directly applying a trust region based SCF (TRSCF) to the Kohn-Sham total energy often leads to slow convergence. We propose to use TRSCF within a direct constrained minimization (DCM) algorithm we developed earlier. The key ingredients of the DCM algorithm involve projecting the total energy function into a sequence of subspaces of small dimension and seeking the minimizer of the total energy function within each subspace. The minimizer of a subspace energy function, which is computed by TRSCF, not only provides a search direction along which the KS total energy function decreases but also gives an optimal 'step length' that yields a sufficient decrease in total energy. A numerical example is provided to demonstrate that the combination of TRSCF and DCM is more efficient than SCF.

  15. Minimal Residual Disease in Acute Myeloid Leukemia: Still a Work in Progress?

    Directory of Open Access Journals (Sweden)

    Federico Mosna

    2017-06-01

    Full Text Available Minimal residual disease evaluation refers to a series of molecular and immunophenotypical techniques aimed at detecting submicroscopic disease after therapy. As such, its application in acute myeloid leukemia has greatly increased our ability to quantify treatment response and to determine the chemosensitivity of the disease, as the final product of the drug schedule, dose intensity, biodistribution, and the pharmacogenetic profile of the patient. There is now consistent evidence for the prognostic power of minimal residual disease evaluation in acute myeloid leukemia, which is complementary to the baseline prognostic assessment of the disease. The focus for its use is therefore shifting to individualizing treatment based on a deeper evaluation of chemosensitivity and residual tumor burden. In this review, we will summarize the results of the major clinical studies evaluating minimal residual disease in acute myeloid leukemia in adults in recent years and address the technical and practical issues still hampering the spread of these techniques outside controlled clinical trials. We will also briefly speculate on future developments and offer our point of view, and a word of caution, on the present use of minimal residual disease measurements in “real-life” practice. Still, as final standardization and diffusion of the methods are sorted out, we believe that minimal residual disease will soon become the new standard for evaluating response in the treatment of acute myeloid leukemia.

  16. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization.

    Science.gov (United States)

    He, Xiaofei; Ji, Ming; Zhang, Chiyuan; Bao, Hujun

    2011-10-01

    In many information processing tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the meaningful feature subset of the original features which can facilitate clustering, classification, and retrieval. In this paper, we consider the feature selection problem in unsupervised learning scenarios, which is particularly difficult due to the absence of class labels that would guide the search for relevant information. Based on Laplacian regularized least squares, which finds a smooth function on the data manifold and minimizes the empirical loss, we propose two novel feature selection algorithms which aim to minimize the expected prediction error of the regularized regression model. Specifically, we select those features such that the size of the parameter covariance matrix of the regularized regression model is minimized. Motivated from experimental design, we use trace and determinant operators to measure the size of the covariance matrix. Efficient computational schemes are also introduced to solve the corresponding optimization problems. Extensive experimental results over various real-life data sets have demonstrated the superiority of the proposed algorithms.
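
    The trace (A-optimality) criterion described above can be sketched greedily (a simplified version with a plain ridge term standing in for the paper's graph-Laplacian regularizer; the names and the greedy search are illustrative):

```python
import numpy as np

def select_features(X, n_select, lam=0.1):
    """Greedily add the feature whose inclusion most reduces
    trace((X_S^T X_S + lam*I)^-1), a proxy for the size of the parameter
    covariance matrix of the regularized regression model."""
    n, p = X.shape
    chosen = []
    for _ in range(n_select):
        best, best_score = None, np.inf
        for j in range(p):
            if j in chosen:
                continue
            S = chosen + [j]
            Xs = X[:, S]
            cov = np.linalg.inv(Xs.T @ Xs + lam * np.eye(len(S)))
            score = np.trace(cov)
            if score < best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen
```

    The determinant operator mentioned in the abstract (D-optimality) would simply replace `np.trace` with `np.linalg.det` in the score.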

  17. North Slope, Alaska ESI: FACILITY (Facility Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains data for oil field facilities for the North Slope of Alaska. Vector points in this data set represent oil field facility locations. This data...

  18. Sequential Convex Programming for Power Set-point Optimization in a Wind Farm using Black-box Models, Simple Turbine Interactions, and Integer Variables

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Jørgensen, John Bagterp

    2012-01-01

    We consider the optimization of power set-points to a large number of wind turbines arranged within close vicinity of each other in a wind farm. The goal is to maximize the total electric power extracted from the wind, taking the wake effects that couple the individual turbines in the farm into a...... is far superior to, a more naive distribution scheme. We employ a fast convex quadratic programming solver to carry out the iterations in the range of microseconds for even large wind farms....

  19. Is non-minimal inflation eternal?

    International Nuclear Information System (INIS)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-01-01

    The possibility that the non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as that in the minimal case in the slow-roll limit. Armed with this result, we have studied some concrete non-minimal inflationary models including the chaotic inflation and the natural inflation, in which the inflaton is non-minimally coupled to gravity. We find that the non-minimal coupling inflation could be eternal in some parameter spaces.

  20. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation

    International Nuclear Information System (INIS)

    Schranz, C; Möller, K; Becher, T; Schädler, D; Weiler, N

    2014-01-01

    Mechanical ventilation carries the risk of ventilator-induced-lung-injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's ‘optimized’ settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.

  1. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.

    Science.gov (United States)

    Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K

    2014-03-01

    Mechanical ventilation carries the risk of ventilator-induced-lung-injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
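The patient-specific trade-off between inspiration pressure and inspiration time can be illustrated with a first-order (single-compartment) model. This is a hedged sketch, not the paper's algorithm: the relation VT = C·pI·(1 − exp(−tI/τ)) with τ = R·C is the textbook single-compartment response, and the resistance, compliance, and target tidal volume below are hypothetical:

```python
import math

def driving_pressure(vt_target, R, C, t_insp):
    """Minimal inspiratory driving pressure (above PEEP) that delivers the
    target tidal volume under a first-order (R, C) respiratory model:
    VT = C * pI * (1 - exp(-tI / tau)), with tau = R * C."""
    tau = R * C
    return vt_target / (C * (1.0 - math.exp(-t_insp / tau)))

# Hypothetical patient: C = 50 mL/cmH2O, R = 10 cmH2O/(L/s) -> tau = 0.5 s
R, C = 10.0, 0.050
for t_insp in (0.8, 1.2, 1.6):
    p = driving_pressure(0.450, R, C, t_insp)
    print(f"tI = {t_insp:.1f} s -> pI = {p:.1f} cmH2O above PEEP")
```

Lengthening tI lowers the required pI, which is exactly the nonlinear trade-off the abstract's algorithm visualizes (here without the adequate-tE constraint on full expiration).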

  2. MIN-CUT BASED SEGMENTATION OF AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    S. Ural

    2012-07-01

    Full Text Available Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating the points with similar features into segments in 3-D which comply with the nature of actual objects is affected by the neighborhood, scale, features and noise among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing which is especially used in pixel labeling problems and establish it for the unstructured 3-D point clouds. The edges of the graph that are connecting the points with each other and nodes representing feature clusters hold the smoothness costs in the spatial domain and data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control the consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP hard minimization problem in low order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm (RMSE). We present the effects of neighborhood and feature determination in the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that smoothness cost that only considers simple distance
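The energy-minimization step can be illustrated on a toy problem. The sketch below is a simplified two-label version with hypothetical scalar features, not the paper's lidar pipeline: data costs become terminal capacities, smoothness costs become neighbor capacities, and a plain Edmonds-Karp max-flow yields the minimum cut:

```python
from collections import defaultdict, deque

def min_cut_labels(features, edges, smooth_w):
    """Binary labeling by s-t min-cut (Edmonds-Karp max-flow on a tiny
    residual graph). Data costs pull each point toward 'low' (source side)
    or 'high' (sink side); smoothness costs penalize cutting neighbor edges."""
    n, s, t = len(features), "s", "t"
    cap = defaultdict(lambda: defaultdict(float))
    for i, f in enumerate(features):
        cap[s][i] += abs(f - 1.0)   # paid if i ends up labeled 'high'
        cap[i][s] += 0.0
        cap[i][t] += abs(f - 0.0)   # paid if i ends up labeled 'low'
        cap[t][i] += 0.0
    for i, j in edges:              # undirected smoothness edges
        cap[i][j] += smooth_w
        cap[j][i] += smooth_w
    while True:                     # augment along BFS paths until none remain
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            # source side of the final residual graph = 'low' labels
            return ["low" if i in parent else "high" for i in range(n)]
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck

features = [0.1, 0.2, 0.15, 0.9, 0.85, 0.95]   # hypothetical point features
edges = [(i, i + 1) for i in range(5)]          # chain neighborhood
print(min_cut_labels(features, edges, smooth_w=0.3))
```

The smoothness weight controls how strongly the cut avoids separating neighbors; the multi-label lidar case is handled in the literature by repeated binary cuts (e.g. alpha-expansion).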

  3. Youth Sports Clubs' Potential as Health-Promoting Setting: Profiles, Motives and Barriers

    Science.gov (United States)

    Meganck, Jeroen; Scheerder, Jeroen; Thibaut, Erik; Seghers, Jan

    2015-01-01

    Setting and Objective: For decades, the World Health Organisation has promoted settings-based health promotion, but its application to leisure settings is minimal. Focusing on organised sports as an important leisure activity, the present study had three goals: exploring the health promotion profile of youth sports clubs, identifying objective…

  4. Laser radiation in tennis elbow treatment: a new minimally invasive alternative

    Science.gov (United States)

    Paganini, Stefan; Thal, Dietmar R.; Werkmann, Klaus

    1998-01-01

    Epicondylitis humeri radialis (EHR), or tennis elbow, is a common cause of elbow joint pain syndromes. We treated patients who had had chronic pain for at least one year, with no improvement from conservative or operative therapies, with a new minimally invasive method, EHR laser radiation (EHR-LR). With this method, periepicondylar coagulations were applied to the trigger points of the patients; for this, the previously established technique of facet joint coagulation with the Nd:YAG laser was modified. In a follow-up of between 6 weeks and 2 years, all patients reported either significant pain reduction or freedom from symptoms. EHR-LR is a new method, situated between conservative and surgical treatments, for minimally invasive therapy of EHR. Several therapeutic rationales for the resulting pain reduction are discussed.

  5. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference of the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets by using both their image samples and their affine hull models, and maximizing the margins of the images sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  6. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference of the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets by using both their image samples and their affine hull models, and maximizing the margins of the images sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
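The margin-based decision rule can be sketched with a sample-based set distance. This simplification replaces the paper's affine-hull models and EM/APG parameter learning with the minimum pairwise distance between sets; the data are synthetic:

```python
import numpy as np

def set_distance(A, B):
    """Minimum pairwise Euclidean distance between two image sets
    (rows are sample vectors) -- a sample-based stand-in for the
    affine-hull distances used in the paper."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min()

def classify_set(test_set, train_sets, labels):
    """Assign the test set to the class with the largest margin:
    (distance to nearest set of any other class) minus
    (distance to nearest set of that class)."""
    best_label, best_margin = None, -np.inf
    for c in set(labels):
        same = min(set_distance(test_set, S)
                   for S, l in zip(train_sets, labels) if l == c)
        other = min(set_distance(test_set, S)
                    for S, l in zip(train_sets, labels) if l != c)
        if other - same > best_margin:
            best_label, best_margin = c, other - same
    return best_label

rng = np.random.default_rng(1)
sets = [rng.normal(0, 0.1, (5, 2)), rng.normal(0, 0.1, (5, 2)),
        rng.normal(3, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))]
labels = [0, 0, 1, 1]
test = rng.normal(3, 0.1, (4, 2))
print(classify_set(test, sets, labels))
```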

  7. Pointing control for LDR

    Science.gov (United States)

    Yam, Y.; Briggs, C.

    1988-01-01

    One important aspect of the LDR control problem is the possible excitations of structural modes due to random disturbances, mirror chopping, and slewing maneuvers. An analysis was performed to yield a first order estimate of the effects of such dynamic excitations. The analysis involved a study of slewing jitters, chopping jitters, disturbance responses, and pointing errors, making use of a simplified planar LDR model which describes the LDR dynamics on a plane perpendicular to the primary reflector. Briefly, the results indicate that the command slewing profile plays an important role in minimizing the resultant jitter, even to a level acceptable without any control action. An optimal profile should therefore be studied.

  8. Active point out-of-plane ultrasound calibration

    Science.gov (United States)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to be a single physical point. In our approach, we minimize the distances between the circular subsets of each image, with them ideally intersecting at a single point. We simulated in noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64mm.
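The key idea, minimizing the distances between the per-image circles so that they meet at a single point, can be sketched as a small Gauss-Newton least-squares problem. The geometry below is a 2-D toy with hypothetical circle centers and radii, not the full tracked-transducer calibration:

```python
import numpy as np

def intersect_circles(centers, radii, p0, iters=50):
    """Least-squares point closest to all circles: minimize
    sum_i (||p - c_i|| - r_i)^2 by Gauss-Newton iteration."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        diff = p - centers                   # (n, 2)
        dist = np.linalg.norm(diff, axis=1)  # (n,)
        r = dist - radii                     # residuals
        J = diff / dist[:, None]             # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p - step
        if np.linalg.norm(step) < 1e-10:
            break
    return p

# three circles constructed to pass through the (hypothetical) point (2, 1)
true_pt = np.array([2.0, 1.0])
centers = np.array([[0.0, 0.0], [5.0, 0.0], [2.0, 5.0]])
radii = np.linalg.norm(centers - true_pt, axis=1)
est = intersect_circles(centers, radii, p0=[1.0, 2.0])
print(est)
```

In the calibration setting each "circle" lives in the axial-elevational plane of one ultrasound frame, mapped into the tracker frame, and the unknown includes the rigid-body calibration parameters as well.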

  9. Optimizing the process of making sweet wines to minimize the content of ochratoxin A.

    Science.gov (United States)

    Ruíz Bejarano, M Jesús; Rodríguez Dodero, M Carmen; García Barroso, Carmelo

    2010-12-22

    During the drying process of raisins, the grapes are subjected to climatic variations that can result in heavy infections of some fungal species that produce ochratoxin A (OTA), a powerful toxic metabolite, whose maximum permitted content is set by the European Union at 2.0 μg/L for grapes, wine and other drinks derived from the grape. The aim of this paper is to optimize the process of making sweet wines in order to minimize the content of ochratoxin A. The results reflect a reduction of the OTA content by up to 24% in grapes dried under controlled conditions in a climatic chamber compared to those sunned in the traditional way. A decrease of the concentrations of OTA is also observed during the processes of vinification. Those wines with prefermentative maceration reached a higher OTA content than the wines without maceration but, unexpectedly, were not the ones preferred from a sensory point of view. In addition, the process of aging in oak casks has been shown to serve as a natural method for reducing the OTA content of these wines.

  10. Robust point matching via vector field consensus.

    Science.gov (United States)

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which by also estimating the variance of the prior model (initialized to a large value) is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
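The EM structure of the method can be illustrated in a stripped-down form. The sketch below keeps the inlier/outlier mixture and the iterative variance update but reduces the nonparametric RKHS field to a single translation, so it is a caricature of vector field consensus rather than the published algorithm; the data, mixing weight `gamma`, and outlier density `1/a` are synthetic assumptions:

```python
import numpy as np

def consensus_translation(x, y, gamma=0.9, a=1.0, iters=50):
    """EM inlier/outlier consensus over matched points x -> y, with the
    displacement field reduced to one translation for clarity (the paper
    fits a nonparametric RKHS field with a Tikhonov prior).
    Returns (estimated shift, inlier posteriors)."""
    v = y - x                                   # displacement of each match
    n, d = v.shape
    shift = np.zeros(d)
    sigma2 = ((v - shift) ** 2).sum() / (n * d)  # large initial variance
    for _ in range(iters):
        e = ((v - shift) ** 2).sum(1)
        # E-step: posterior that each match is an inlier (Gaussian vs uniform)
        pin = gamma * np.exp(-e / (2 * sigma2)) / (2 * np.pi * sigma2) ** (d / 2)
        p = pin / (pin + (1 - gamma) / a)
        # M-step: weighted mean field and variance update
        shift = (p[:, None] * v).sum(0) / p.sum()
        sigma2 = max((p * ((v - shift) ** 2).sum(1)).sum() / (p.sum() * d), 1e-12)
    return shift, p

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, (60, 2))
y = x + [0.10, 0.05] + rng.normal(0, 0.005, (60, 2))  # true field: a shift
out = rng.choice(60, 12, replace=False)
y[out] = rng.uniform(0, 1, (12, 2))                    # corrupt 12 matches
shift, p = consensus_translation(x, y)
print(shift.round(3), (p > 0.5).sum(), "matches kept as inliers")
```

As the variance shrinks over iterations, matches inconsistent with the consensus field get posteriors near zero, mirroring the outlier-rejection behavior described in the abstract.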

  11. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-11-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface. The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence. As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

  12. Hexavalent Chromium Minimization Strategy

    Science.gov (United States)

    2011-05-01

    Logistics Initiative: DoD Hexavalent Chromium Minimization - Non-Chrome Primer. Hexavalent chromium (Cr(VI)) is a cancer hazard. Office of the Secretary of Defense, Hexavalent Chromium Minimization Strategy report.

  13. Topological fixed point theory of multivalued mappings

    CERN Document Server

    Górniewicz, Lech

    1999-01-01

    This volume presents a broad introduction to the topological fixed point theory of multivalued (set-valued) mappings, treating both classical concepts as well as modern techniques. A variety of up-to-date results is described within a unified framework. Topics covered include the basic theory of set-valued mappings with both convex and nonconvex values, approximation and homological methods in the fixed point theory together with a thorough discussion of various index theories for mappings with a topologically complex structure of values, applications to many fields of mathematics, mathematical economics and related subjects, and the fixed point approach to the theory of ordinary differential inclusions. The work emphasises the topological aspect of the theory, and gives special attention to the Lefschetz and Nielsen fixed point theory for acyclic valued mappings with diverse compactness assumptions via graph approximation and the homological approach. Audience: This work will be of interest to researchers an...

  14. Las Matematicas: Lenguaje Universal. Grados Intermedios, Nivel 5a: Geometria - Conjuntos de Puntos (Mathematics: A Universal Language. Intermediate Grades, Level 5a: Geometry - Sets of Points).

    Science.gov (United States)

    Dissemination and Assessment Center for Bilingual Education, Austin, TX.

    This is one of a series of student booklets designed for use in a bilingual mathematics program in grades 6-8. The general format is to present each page in both Spanish and English. The mathematical topics in the booklet include points, lines, planes, space, angles, and intersection and union of sets. (MK)

  15. International urinary tract imaging basic spinal cord injury data set

    DEFF Research Database (Denmark)

    Biering-Sørensen, F; Craggs, M; Kennelly, M

    2008-01-01

    OBJECTIVE: To create an International Urinary Tract Imaging Basic Spinal Cord Injury (SCI) Data Set within the framework of the International SCI Data Sets. SETTING: An international working group. METHODS: The draft of the Data Set was developed by a working group comprising members appointed...... of comparable minimal data. RESULTS: The variables included in the International Urinary Tract Imaging Basic SCI Data Set are the results obtained using the following investigations: intravenous pyelography or computer tomography urogram or ultrasound, X-ray, renography, clearance, cystogram, voiding cystogram...

  16. Measuring the exhaust gas dew point of continuously operated combustion plants

    Energy Technology Data Exchange (ETDEWEB)

    Fehler, D.

    1985-07-16

    Low waste-gas temperatures are one means of minimizing the energy consumption of combustion facilities. However, condensation in the waste gas should be prevented, since it could destroy plant components. Measuring the waste-gas dew point makes it possible to control combustion parameters so that the plant can operate at low temperatures without risk of condensation. Dew point sensors thus provide an important signal for optimizing combustion facilities.
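For the water dew point, the standard Magnus approximation gives a quick estimate from gas temperature and relative humidity. Note this is the plain water dew point; acid dew points of flue gases are higher and require dedicated correlations. The constants below are the commonly used Magnus coefficients:

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Water dew point in degC via the Magnus approximation
    (reasonable for roughly -45..60 degC ambient conditions)."""
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

print(round(dew_point_c(60.0, 20.0), 1))  # gas at 60 degC, 20% RH
```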

  17. Millicharge or decay: a critical take on Minimal Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Nobile, Eugenio Del [Department of Physics and Astronomy, UCLA, 475 Portola Plaza, Los Angeles, CA 90095 (United States); Dipartimento di Fisica e Astronomia “G. Galilei”, Università di Padova and INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy); Nardecchia, Marco [DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Panci, Paolo [Institut d’Astrophysique de Paris, UMR 7095 CNRS, Université Pierre et Marie Curie, 98 bis Boulevard Arago, Paris 75014 (France)

    2016-04-26

    Minimal Dark Matter (MDM) is a theoretical framework valued for its minimality and yet its predictive power. Of the only two viable candidates singled out in the original analysis, the scalar eptaplet has been found to decay too quickly to be around today, while the fermionic quintuplet is now being probed by indirect Dark Matter (DM) searches. It is therefore timely to critically review the MDM paradigm, possibly pointing out generalizations of this framework. We propose and explore two distinct directions. One is to abandon the assumption of DM electric neutrality in favor of absolutely stable, millicharged DM candidates which are part of SU(2)_L multiplets with integer isospin. Another possibility is to lower the cutoff of the model, originally fixed at the Planck scale, to allow for DM decays. We find new viable MDM candidates and study their phenomenology in detail.

  18. Modeling fixation locations using spatial point processes.

    Science.gov (United States)

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
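A spatial Poisson process with an image-dependent intensity can be simulated by thinning, which makes the "fixations as a point process" idea concrete. The intensity map below is a hypothetical centre-biased saliency function, not a fitted model:

```python
import numpy as np

def sample_inhomogeneous_poisson(intensity, lam_max, width, height, rng):
    """Simulate an inhomogeneous spatial Poisson process on a rectangle by
    thinning: draw a homogeneous process at rate lam_max, then keep each
    point with probability intensity(x, y) / lam_max."""
    n = rng.poisson(lam_max * width * height)
    pts = rng.uniform([0, 0], [width, height], size=(n, 2))
    keep = rng.uniform(0, 1, n) < intensity(pts[:, 0], pts[:, 1]) / lam_max
    return pts[keep]

# hypothetical "saliency" intensity: fixations concentrate near the centre
rng = np.random.default_rng(3)
intensity = lambda x, y: 200.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
fix = sample_inhomogeneous_poisson(intensity, 200.0, 1.0, 1.0, rng)
print(len(fix), "simulated fixations")
```

Fitting goes in the opposite direction: given observed fixations, one estimates the intensity function from image features, which is how point processes relate image properties to fixation locations.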

  19. Reach-to-grasp movement as a minimization process.

    Science.gov (United States)

    Yang, Fang; Feldman, Anatol G

    2010-02-01

    It is known that hand transport and grasping are functionally different but spatially coordinated components of reach-to-grasp (RTG) movements. As an extension of this notion, we suggested that body segments involved in RTG movements are controlled as a coherent ensemble by a global minimization process associated with the necessity for the hand to reach the motor goal. Different RTG components emerge following this process without pre-programming. Specifically, the minimization process may result from the tendency of neuromuscular elements to diminish the spatial gap between the actual arm-hand configuration and its virtual (referent) configuration specified by the brain. The referent configuration is specified depending on the object shape, localization, and orientation. Since the minimization process is gradual, it can be interrupted and resumed following mechanical perturbations, at any phase during RTG movements, including hand closure. To test this prediction of the minimization hypothesis, we asked subjects to reach and grasp a cube placed within the reach of the arm. Vision was prevented during movement until the hand returned to its initial position. As predicted, by arresting wrist motion at different points of hand transport in randomly selected trials, it was possible to halt changes in hand aperture at any phase, not only during hand opening but also during hand closure. Aperture changes resumed soon after the wrist was released. Another test of the minimization hypothesis was made in RTG movements to an object placed beyond the reach of the arm. It has previously been shown (Rossi et al. in J Physiol 538:659-671, 2002) that in such movements, the trunk motion begins to contribute to hand transport only after a critical phase when the shifts in the referent arm configuration have finished (at about the time when hand velocity is maximal). The minimization rule suggests that when the virtual contribution of the arm to hand transport is completed

  20. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation, but few approaches give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In the paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
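The core idea, picking sampling times that minimize the variance of the parameter estimate, can be shown on a one-parameter decay model, where minimizing the estimator variance is equivalent to maximizing the Fisher information. This toy uses exhaustive search in place of the paper's quantum-inspired evolutionary algorithm, and the model and noise level are hypothetical:

```python
import itertools
import math

def fisher_info(times, A=1.0, k=0.5, sigma=0.05):
    """Scalar Fisher information for the decay rate k in y = A*exp(-k*t)
    observed with iid Gaussian noise: I(k) = sum_i (dy/dk)^2 / sigma^2,
    where dy/dk = -A * t * exp(-k*t). Var(k_hat) >= 1 / I(k)."""
    return sum((A * t * math.exp(-k * t)) ** 2 for t in times) / sigma ** 2

candidates = [0.5 * i for i in range(1, 21)]    # candidate times 0.5 .. 10.0
best = max(itertools.combinations(candidates, 3), key=fisher_info)
print(best)   # the sensitivity |dy/dk| peaks at t = 1/k = 2.0
```

The optimum clusters around t = 1/k, where the measurement is most sensitive to k; with several parameters the scalar information becomes a matrix and one maximizes, e.g., its determinant, which is where heuristic optimizers such as the paper's evolutionary algorithm become necessary.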